pax_global_header00006660000000000000000000000064145042637450014524gustar00rootroot0000000000000052 comment=d050cc9004961851442b232b6ffe0a1d337167f7 tdigest-1.4.1/000077500000000000000000000000001450426374500131725ustar00rootroot00000000000000tdigest-1.4.1/.github/000077500000000000000000000000001450426374500145325ustar00rootroot00000000000000tdigest-1.4.1/.github/workflows/000077500000000000000000000000001450426374500165675ustar00rootroot00000000000000tdigest-1.4.1/.github/workflows/ci.yml000066400000000000000000000007361450426374500177130ustar00rootroot00000000000000name: make installcheck on: [push, pull_request] jobs: test: strategy: matrix: pg: [16, 15, 14, 13, 12, 11, 10, 9.6] name: PostgreSQL ${{ matrix.pg }} runs-on: ubuntu-latest container: pgxn/pgxn-tools steps: - name: Start PostgreSQL ${{ matrix.pg }} run: pg-start ${{ matrix.pg }} - name: Check out the repo uses: actions/checkout@v2 - name: Test on PostgreSQL ${{ matrix.pg }} run: pg-build-test tdigest-1.4.1/.gitignore000066400000000000000000000000771450426374500151660ustar00rootroot00000000000000.deps/ results/ **/*.o **/*.so regression.diffs regression.out tdigest-1.4.1/LICENSE000066400000000000000000000017011450426374500141760ustar00rootroot00000000000000Copyright (c) 2019, Tomas Vondra (tomas.vondra@postgresql.org). Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies. IN NO EVENT SHALL TOMAS VONDRA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF TOMAS VONDRA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. TOMAS VONDRA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND TOMAS VONDRA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS. tdigest-1.4.1/Makefile000066400000000000000000000007421450426374500146350ustar00rootroot00000000000000MODULE_big = tdigest OBJS = tdigest.o EXTENSION = tdigest DATA = tdigest--1.0.0.sql tdigest--1.0.0--1.0.1.sql tdigest--1.0.1--1.2.0.sql tdigest--1.2.0--1.3.0.sql tdigest--1.3.0--1.4.0.sql tdigest--1.4.0--1.4.1.sql MODULES = tdigest CFLAGS=`pg_config --includedir-server` REGRESS = basic cast conversions incremental parallel_query value_count_api trimmed_aggregates REGRESS_OPTS = --inputdir=test PG_CONFIG = pg_config PGXS := $(shell $(PG_CONFIG) --pgxs) include $(PGXS) tdigest-1.4.1/README.md000066400000000000000000000523011450426374500144520ustar00rootroot00000000000000# t-digest extension [![make installcheck](https://github.com/tvondra/tdigest/actions/workflows/ci.yml/badge.svg)](https://github.com/tvondra/tdigest/actions/workflows/ci.yml) This PostgreSQL extension implements t-digest, a data structure for on-line accumulation of rank-based statistics such as quantiles and trimmed means. The algorithm is also very friendly to parallel programs. The t-digest data structure was introduced by Ted Dunning in 2013, and a more detailed description and example implementation are available in his GitHub repository [1]. In particular, see the paper [2] explaining the idea.
Some of the code was inspired by tdigestc [3] and tdigest [4] by ajwerner. Estimates produced by t-digests can be orders of magnitude more accurate than those produced by previous digest algorithms, in spite of the fact that t-digests are much more compact when stored on disk. ## Basic usage The extension provides the following aggregate functions, which you can see as a replacement for the `percentile_cont` aggregate: * `tdigest_percentile(value double precision, compression int, quantile double precision)` * `tdigest_percentile(value double precision, compression int, quantiles double precision[])` * `tdigest_percentile_of(value double precision, compression int, value double precision)` * `tdigest_percentile_of(value double precision, compression int, values double precision[])` That is, instead of running ``` SELECT percentile_cont(0.95) WITHIN GROUP (ORDER BY a) FROM t ``` you might now run ``` SELECT tdigest_percentile(a, 100, 0.95) FROM t ``` and similarly for the variants with an array of percentiles. This should run much faster, as the t-digest does not require sorting all of the data and can be parallelized. Also, the memory usage is very limited, depending on the compression parameter. ## Accuracy All functions building the t-digest summaries accept an `accuracy` parameter that determines how detailed the histogram approximating the CDF is. The value essentially limits the number of "buckets" in the t-digest, so the higher the value, the larger the digest. Each bucket is represented by two `double precision` values (i.e. 16B per bucket), so 10000 buckets means the largest possible t-digest is ~160kB. That is, however, before the transparent compression all varlena types go through, so the on-disk footprint may be much smaller. It's hard to say what a good accuracy value is, as it very much depends on the data set (how non-uniform the data distribution is, etc.), but given a t-digest with N buckets, the error is roughly 1/N. So t-digests built with accuracy set to 100 have roughly 1% error (with respect to the total range of data), which is more than enough for most use cases. This however ignores the fact that t-digest buckets are not uniformly sized. Buckets close to 0.0 and 1.0 are much smaller (thus providing more accurate results) while buckets close to the median are much bigger. That's consistent with the purpose of the t-digest, i.e. estimating percentiles close to the extremes. ## Advanced usage The extension also provides a `tdigest` data type, which makes it possible to precompute digests for subsets of data, and then quickly combine those "partial" digests into a digest representing the whole data set. The prebuilt digests should be much smaller compared to the original data set, allowing significantly faster response times. To compute the t-digest, use the `tdigest` aggregate function. The digests can then be stored on disk and later summarized using the `tdigest_percentile` functions (with `tdigest` as the first argument).
* `tdigest(value double precision, compression int)` * `tdigest_percentile(digest tdigest, quantile double precision)` * `tdigest_percentile(digest tdigest, quantiles double precision[])` * `tdigest_percentile_of(digest tdigest, value double precision)` * `tdigest_percentile_of(digest tdigest, values double precision[])` So for example you may do this: ``` -- table with some random source data CREATE TABLE t (a int, b int, c double precision); INSERT INTO t SELECT 10 * random(), 10 * random(), random() FROM generate_series(1,10000000); -- pre-aggregate the data into digests stored in table "p" CREATE TABLE p AS SELECT a, b, tdigest(c, 100) AS d FROM t GROUP BY a, b; -- summarize the data from "p" (compute the 95th percentile) SELECT a, tdigest_percentile(d, 0.95) FROM p GROUP BY a ORDER BY a; ``` The pre-aggregated table is indeed much smaller: ~~~ db=# \d+ List of relations Schema | Name | Type | Owner | Persistence | Size | Description --------+------+-------+-------+-------------+--------+------------- public | p | table | user | permanent | 120 kB | public | t | table | user | permanent | 422 MB | (2 rows) ~~~ And on my machine the last query takes ~1.5ms. Compare that to queries on the source data: ~~~ \timing on -- exact results SELECT a, percentile_cont(0.95) WITHIN GROUP (ORDER BY c) FROM t GROUP BY a ORDER BY a; ... Time: 6956.566 ms (00:06.957) -- tdigest estimate (no parallelism) SET max_parallel_workers_per_gather = 0; SELECT a, tdigest_percentile(c, 100, 0.95) FROM t GROUP BY a ORDER BY a; ... Time: 2873.116 ms (00:02.873) -- tdigest estimate (4 workers) SET max_parallel_workers_per_gather = 4; SELECT a, tdigest_percentile(c, 100, 0.95) FROM t GROUP BY a ORDER BY a; ... Time: 893.538 ms ~~~ This shows how much more efficient the t-digest estimate is compared to the exact query with `percentile_cont` (the difference would increase for larger data sets, due to increased overhead for spilling to disk). It also shows how effective the pre-aggregation can be. There are 121 rows in table `p` so with 120kB disk space that's ~1kB per row, each representing about 80k values. With 8B per value, that's ~640kB, i.e. a compression ratio of 640:1. As the digest size is not tied to the number of items, this will only improve for larger data sets. ## Pre-aggregated data When dealing with data sets with a lot of redundancy (values repeating many times), it may be more efficient to partially pre-aggregate the data and use functions that allow specifying the number of occurrences for each value. This reduces the number of SQL-function calls. There are five such aggregate functions: * `tdigest_percentile(value double precision, count bigint, compression int, quantile double precision)` * `tdigest_percentile(value double precision, count bigint, compression int, quantiles double precision[])` * `tdigest_percentile_of(value double precision, count bigint, compression int, value double precision)` * `tdigest_percentile_of(value double precision, count bigint, compression int, values double precision[])` * `tdigest(value double precision, count bigint, compression int)` ## Incremental updates An existing t-digest may be updated incrementally, either by adding a single value, or by merging-in a whole t-digest.
For example, it's possible to add 1000 random values to the t-digest like this: ``` DO LANGUAGE plpgsql $$ DECLARE r record; BEGIN FOR r IN (SELECT random() AS v FROM generate_series(1,1000)) LOOP UPDATE t SET d = tdigest_add(d, r.v); END LOOP; END $$; ``` The overhead of doing this is fairly high, though - the t-digest has to be deserialized and serialized over and over, for each value we're adding. That overhead may be reduced by pre-aggregating data, either into an array or a t-digest. ``` DO LANGUAGE plpgsql $$ DECLARE a double precision[]; BEGIN SELECT array_agg(random()) INTO a FROM generate_series(1,1000); UPDATE t SET d = tdigest_add(d, a); END $$; ``` Alternatively, it's possible to use pre-aggregated t-digest values instead of the arrays: ``` DO LANGUAGE plpgsql $$ DECLARE r record; BEGIN FOR r IN (SELECT mod(i,3) AS a, tdigest(random(),100) AS d FROM generate_series(1,1000) s(i) GROUP BY mod(i,3)) LOOP UPDATE t SET d = tdigest_union(d, r.d); END LOOP; END $$; ``` It may be undesirable to perform compaction after every incremental update (especially when adding the values one by one). All functions in the incremental API allow disabling compaction by setting the `compact` parameter to `false`. The disadvantage is that without the compaction, the resulting digests may be somewhat larger (by a factor of 10). It's advisable to use either the multi-value functions (with compaction after each batch) if possible, or to force compaction, e.g. by doing something like this: ``` UPDATE t SET d = tdigest_union(NULL, d); ``` ## Trimmed aggregates The extension provides two aggregate functions for calculating trimmed (truncated) sums and averages. * `tdigest_sum(digest tdigest, low double precision, high double precision)` * `tdigest_avg(digest tdigest, low double precision, high double precision)` The `low` and `high` parameters specify where to truncate the data. ## Functions ### `tdigest_percentile(value, accuracy, percentile)` Computes a requested percentile from the data, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile(t.c, 100, 0.95) FROM t ``` #### Parameters - `value` - values to aggregate - `accuracy` - accuracy of the t-digest - `percentile` - value in [0, 1] specifying the percentile ### `tdigest_percentile(value, count, accuracy, percentile)` Computes a requested percentile from the data, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile(t.c, t.a, 100, 0.95) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences of the value - `accuracy` - accuracy of the t-digest - `percentile` - value in [0, 1] specifying the percentile ### `tdigest_percentile(value, accuracy, percentile[])` Computes requested percentiles from the data, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile(t.c, 100, ARRAY[0.95, 0.99]) FROM t ``` #### Parameters - `value` - values to aggregate - `accuracy` - accuracy of the t-digest - `percentile[]` - array of values in [0, 1] specifying the percentiles ### `tdigest_percentile(value, count, accuracy, percentile[])` Computes requested percentiles from the data, using a t-digest with the specified accuracy.
#### Synopsis ``` SELECT tdigest_percentile(t.c, t.a, 100, ARRAY[0.95, 0.99]) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences of the value - `accuracy` - accuracy of the t-digest - `percentile[]` - array of values in [0, 1] specifying the percentiles ### `tdigest_percentile_of(value, accuracy, hypothetical_value)` Computes the relative rank of a hypothetical value, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile_of(t.c, 100, 139832.3) FROM t ``` #### Parameters - `value` - values to aggregate - `accuracy` - accuracy of the t-digest - `hypothetical_value` - hypothetical value ### `tdigest_percentile_of(value, count, accuracy, hypothetical_value)` Computes the relative rank of a hypothetical value, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile_of(t.c, t.a, 100, 139832.3) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences of the value - `accuracy` - accuracy of the t-digest - `hypothetical_value` - hypothetical value ### `tdigest_percentile_of(value, accuracy, hypothetical_value[])` Computes relative ranks of hypothetical values, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile_of(t.c, 100, ARRAY[6343.43, 139832.3]) FROM t ``` #### Parameters - `value` - values to aggregate - `accuracy` - accuracy of the t-digest - `hypothetical_value` - hypothetical values ### `tdigest_percentile_of(value, count, accuracy, hypothetical_value[])` Computes relative ranks of hypothetical values, using a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest_percentile_of(t.c, t.a, 100, ARRAY[6343.43, 139832.3]) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences of the value - `accuracy` - accuracy of the t-digest - `hypothetical_value` - hypothetical values ### `tdigest(value, accuracy)` Computes a t-digest with the specified accuracy. #### Synopsis ``` SELECT tdigest(t.c, 100) FROM t ``` #### Parameters - `value` - values to aggregate - `accuracy` - accuracy of the t-digest ### `tdigest(value, count, accuracy)` Computes a t-digest with the specified accuracy. The values are added with as many occurrences as determined by the count parameter. #### Synopsis ``` SELECT tdigest(t.c, t.a, 100) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences for each value - `accuracy` - accuracy of the t-digest ### `tdigest_count(tdigest)` Returns the number of items represented by the t-digest. #### Synopsis ``` SELECT tdigest_count(d) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo ``` ### `tdigest_percentile(tdigest, percentile)` Computes a requested percentile from the pre-computed t-digests. #### Synopsis ``` SELECT tdigest_percentile(d, 0.99) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo ``` #### Parameters - `tdigest` - t-digest to aggregate and process - `percentile` - value in [0, 1] specifying the percentile ### `tdigest_percentile(tdigest, percentile[])` Computes requested percentiles from the pre-computed t-digests. #### Synopsis ``` SELECT tdigest_percentile(d, ARRAY[0.95, 0.99]) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo ``` #### Parameters - `tdigest` - t-digest to aggregate and process - `percentile` - values in [0, 1] specifying the percentiles ### `tdigest_percentile_of(tdigest, hypothetical_value)` Computes the relative rank of a hypothetical value, using a pre-computed t-digest.
#### Synopsis ``` SELECT tdigest_percentile_of(d, 349834.1) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo ``` #### Parameters - `tdigest` - t-digest to aggregate and process - `hypothetical_value` - hypothetical value ### `tdigest_percentile_of(tdigest, hypothetical_value[])` Computes relative ranks of hypothetical values, using a pre-computed t-digest. #### Synopsis ``` SELECT tdigest_percentile_of(d, ARRAY[438.256, 349834.1]) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo ``` #### Parameters - `tdigest` - t-digest to aggregate and process - `hypothetical_value` - hypothetical values ### `tdigest_add(tdigest, double precision)` Performs an incremental update of the t-digest by adding a single value. #### Synopsis ``` UPDATE t SET d = tdigest_add(d, random()); ``` #### Parameters - `tdigest` - t-digest to update - `element` - value to add to the digest - `compression` - compression value (used when the t-digest is `NULL`) - `compact` - force compaction (default: true) ### `tdigest_add(tdigest, double precision[])` Performs an incremental update of the t-digest by adding values from an array. #### Synopsis ``` UPDATE t SET d = tdigest_add(d, ARRAY[random(), random(), random()]); ``` #### Parameters - `tdigest` - t-digest to update - `elements` - array of values to add to the digest - `compression` - compression value (used when the t-digest is `NULL`) - `compact` - force compaction (default: true) ### `tdigest_union(tdigest, tdigest)` Performs an incremental update of the t-digest by merging-in another digest. #### Synopsis ``` WITH x AS (SELECT tdigest(random(), 100) AS d FROM generate_series(1,1000)) UPDATE t SET d = tdigest_union(t.d, x.d) FROM x; ``` #### Parameters - `tdigest` - t-digest to update - `tdigest2` - t-digest to merge into `tdigest` - `compact` - force compaction (default: true) ### `tdigest_json(tdigest)` Returns the t-digest as a JSON value. The function is also exposed as a cast from `tdigest` to `json`. #### Synopsis ``` SELECT tdigest_json(d) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo; SELECT CAST(d AS json) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo; ``` #### Parameters - `tdigest` - t-digest to cast to a `json` value ### `tdigest_double_array(tdigest)` Returns the t-digest as a `double precision[]` array. The function is also exposed as a cast from `tdigest` to `double precision[]`. #### Synopsis ``` SELECT tdigest_double_array(d) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo; SELECT CAST(d AS double precision[]) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo; ``` #### Parameters - `tdigest` - t-digest to cast to a `double precision[]` value ### `tdigest_avg(value, count, accuracy, low, high)` Computes the trimmed mean of values, discarding values at the low and high end. The `low` and `high` values specify which part of the sample should be included in the mean, so e.g. `low = 0.1` and `high = 0.9` means 10% low and high values will be discarded. #### Synopsis ``` SELECT tdigest_avg(t.v, t.c, 100, 0.1, 0.9) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences of the value - `accuracy` - accuracy of the t-digest - `low` - low threshold percentile (values below are discarded) - `high` - high threshold percentile (values above are discarded) ### `tdigest_avg(tdigest, low, high)` Computes the trimmed mean of values, discarding values at the low and high end. The `low` and `high` values specify which part of the sample should be included in the mean, so e.g.
`low = 0.1` and `high = 0.9` means 10% low and high values will be discarded. #### Synopsis ``` SELECT tdigest_avg(d, 0.05, 0.95) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo; ``` #### Parameters - `tdigest` - tdigest to calculate mean from - `low` - low threshold percentile (values below are discarded) - `high` - high threshold percentile (values above are discarded) ### `tdigest_sum(value, accuracy, low, high)` Computes trimmed sum of values, discarding values at the low and high end. The `low` and `high` values specify which part of the sample should be included in the sum, so e.g. `low = 0.1` and `high = 0.9` means 10% low and high values will be discarded. #### Synopsis ``` SELECT tdigest_sum(t.v, 100, 0.1, 0.9) FROM t ``` #### Parameters - `value` - values to aggregate - `accuracy` - accuracy of the t-digest - `low` - low threshold percentile (values below are discarded) - `high` - high threshold percentile (values above are discarded) ### `tdigest_sum(value, count, accuracy, low, high)` Computes trimmed sum of values, discarding values at the low and high end. The `low` and `high` values specify which part of the sample should be included in the sum, so e.g. `low = 0.1` and `high = 0.9` means 10% low and high values will be discarded. #### Synopsis ``` SELECT tdigest_sum(t.v, t.c, 100, 0.1, 0.9) FROM t ``` #### Parameters - `value` - values to aggregate - `count` - number of occurrences of the value - `accuracy` - accuracy of the t-digest - `low` - low threshold percentile (values below are discarded) - `high` - high threshold percentile (values above are discarded) ### `tdigest_sum(tdigest, low, high)` Computes trimmed sum of values, discarding values at the low and high end. The `low` and `high` values specify which part of the sample should be included in the sum, so e.g. `low = 0.1` and `high = 0.9` means 10% low and high values will be discarded. #### Synopsis ``` SELECT tdigest_sum(d, 0.05, 0.95) FROM ( SELECT tdigest(t.c, 100) AS d FROM t ) foo; ``` #### Parameters - `tdigest` - tdigest to calculate sum from - `low` - low threshold percentile (values below are discarded) - `high` - high threshold percentile (values above are discarded) ### `tdigest_avg(tdigest, double precision, double precision)` Calculates average of values between the low and high threshold. #### Synopsis ``` SELECT tdigest_avg(tdigest(v, 100), 0.25, 0.75) FROM generate_series(1,10000) ``` #### Parameters - `tdigest` - t-digest to calculate average for - `low` - low threshold (truncate values below) - `high` - high threshold (truncate values above) ### `tdigest_sum(tdigest, double precision, double precision)` Calculates sum of values between the low and high threshold. #### Synopsis ``` SELECT tdigest_sum(tdigest(v, 100), 0.25, 0.75) FROM generate_series(1,10000) ``` #### Parameters - `tdigest` - t-digest to calculate sum for - `low` - low threshold (truncate values below) - `high` - high threshold (truncate values above) Notes ----- At the moment, the extension only supports `double precision` values, but it should not be very difficult to extend it to other numeric types (both integer and/or floating point, including `numeric`). Ultimately, it could support any data type with a concept of ordering and mean. The estimates do depend on the order of incoming data, and so may differ between runs. This applies especially to parallel queries, for which the workers generally see different subsets of data for each run (and build different digests, which are then combined together). 
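For example, one way to observe this order dependence (an illustrative sketch, reusing table `t` and column `c` from the examples above) is to build the digest from the same data in two different input orders - the two estimates will typically be close, but not necessarily identical:

```
-- same data, different input order - the estimates may differ slightly
SELECT tdigest_percentile(c, 100, 0.95) FROM (SELECT c FROM t ORDER BY c) foo;
SELECT tdigest_percentile(c, 100, 0.95) FROM (SELECT c FROM t ORDER BY random()) foo;
```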
License ------- This software is distributed under the terms of PostgreSQL license. See LICENSE or http://www.opensource.org/licenses/bsd-license.php for more details. [1] https://github.com/tdunning/t-digest [2] https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf [3] https://github.com/ajwerner/tdigestc [4] https://github.com/ajwerner/tdigest tdigest-1.4.1/TODO.md000066400000000000000000000012071450426374500142610ustar00rootroot00000000000000TODO ==== This is a simple list of possible improvements and enhancements, in mostly random order. So if you're thinking about contributing to the extension, this might be an inspiration. Of course, if you can think of yet another improvement, add it to this list. * Support other data types, not just "double precision". Supporting "numeric" seems natural, maybe we could support integer types too (possibly with rounding of the interpolated value). * Explore adding a "discrete" variant, similar to percentile_disc. I'm not sure this is actually possible, considering we're not keeping all the source data, forcing us to interpolate. tdigest-1.4.1/scripts/000077500000000000000000000000001450426374500146615ustar00rootroot00000000000000tdigest-1.4.1/scripts/accuracy.sql000066400000000000000000000201661450426374500172010ustar00rootroot00000000000000drop table if exists t; create table t (v double precision); drop table if exists datasets; create table datasets (ds_name text, ds_sql text); insert into datasets values ('uniform', 'with d as (select pow(random(), 1) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(2)', 'with d as (select pow(random(), 2) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(4)', 'with d as (select pow(random(), 4) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(0.5)', 'with d as (select pow(random(), 0.5) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(0.25)', 'with d as (select pow(random(), 0.25) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(2)', 'with d as (select 1.0 - pow(random(), 2) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(4)', 'with d as (select 1.0 - pow(random(), 4) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(0.5)', 'with d as (select 1.0 - pow(random(), 0.5) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(0.25)', 'with d as (select 1.0 - pow(random(), 0.25) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); create or replace function test_queries(npercentiles int, p double precision, nvalues int, minvalues int, maxvalues int, out dataset text, out 
simple_random double precision, out simple_asc double precision, out simple_desc double precision, out preagg_random double precision, out preagg_asc double precision, out preagg_desc double precision, out simple_asc_cmp double precision, out simple_desc_cmp double precision, out preagg_random_cmp double precision, out preagg_asc_cmp double precision, out preagg_desc_cmp double precision) returns setof record language plpgsql as $$ declare d record; perc_cont_percs double precision[]; simple_random_percs double precision[]; simple_asc_percs double precision[]; simple_desc_percs double precision[]; preagg_random_percs double precision[]; preagg_asc_percs double precision[]; preagg_desc_percs double precision[]; percs double precision[]; run int; tmp_simple_random double precision; tmp_simple_asc double precision; tmp_simple_desc double precision; tmp_preagg_random double precision; tmp_preagg_asc double precision; tmp_preagg_desc double precision; begin raise notice 'percentiles % range % values % min % max %', npercentiles, p, nvalues, minvalues, maxvalues; -- generate percentiles select array_agg(x) into percs from ( select i::double precision / npercentiles as x from generate_series(1,npercentiles) s(i) ) foo where x <= p or x > 1.0 - p; for d in (select * from datasets order by ds_name) loop simple_random := 0; simple_asc := 0; simple_desc := 0; preagg_random := 0; preagg_asc := 0; preagg_desc := 0; for run in 1..10 loop -- rebuild the table execute 'truncate t'; execute format(d.ds_sql, nvalues, minvalues, (maxvalues - minvalues)); execute 'analyze t'; dataset := d.ds_name; select percentile_cont(percs) within group (order by v) into perc_cont_percs from (select * from t) d; select tdigest_percentile(v, 100, percs) into simple_random_percs from (select * from t order by random()) d; select tdigest_percentile(v, 100, percs) into simple_asc_percs from (select * from t order by v) d; select tdigest_percentile(v, 100, percs) into simple_desc_percs from (select * from t order by v desc) d; select tdigest_percentile(v, c, 100, percs) into preagg_random_percs from (select v, count(*) as c from t group by v order by random()) d; select tdigest_percentile(v, c, 100, percs) into preagg_asc_percs from (select v, count(*) as c from t group by v order by v) d; select tdigest_percentile(v, c, 100, percs) into preagg_desc_percs from (select v, count(*) as c from t group by v order by v desc) d; select sqrt(sum(pow(a-b,2))) into tmp_simple_random from (select unnest(perc_cont_percs) as a, unnest(simple_random_percs) as b) d; select sqrt(sum(pow(a-b,2))) into tmp_simple_asc from (select unnest(perc_cont_percs) as a, unnest(simple_asc_percs) as b) d; select sqrt(sum(pow(a-b,2))) into tmp_simple_desc from (select unnest(perc_cont_percs) as a, unnest(simple_desc_percs) as b) d; select sqrt(sum(pow(a-b,2))) into tmp_preagg_random from (select unnest(perc_cont_percs) as a, unnest(preagg_random_percs) as b) d; select sqrt(sum(pow(a-b,2))) into tmp_preagg_asc from (select unnest(perc_cont_percs) as a, unnest(preagg_asc_percs) as b) d; select sqrt(sum(pow(a-b,2))) into tmp_preagg_desc from (select unnest(perc_cont_percs) as a, unnest(preagg_desc_percs) as b) d; simple_random := simple_random + tmp_simple_random / 10; simple_asc := simple_asc + tmp_simple_asc / 10; simple_desc := simple_desc + tmp_simple_desc / 10; preagg_random := preagg_random + tmp_preagg_random / 10; preagg_asc := preagg_asc + tmp_preagg_asc / 10; preagg_desc := preagg_desc + tmp_preagg_desc / 10; end loop; simple_asc_cmp := 
round((simple_asc / simple_random)::numeric, 2); simple_desc_cmp := round((simple_desc / simple_random)::numeric, 2); preagg_random_cmp := round((preagg_random / simple_random)::numeric, 2); preagg_asc_cmp := round((preagg_asc / simple_random)::numeric, 2); preagg_desc_cmp := round((preagg_desc / simple_random)::numeric, 2); simple_random := round(simple_random::numeric, 6); simple_asc := round(simple_asc::numeric, 6); simple_desc := round(simple_desc::numeric, 6); preagg_random := round(preagg_random::numeric, 6); preagg_asc := round(preagg_asc::numeric, 6); preagg_desc := round(preagg_desc::numeric, 6); return next; end loop; return; end; $$; select * from test_queries(1000, 0.01, 10000, 1, 1); select * from test_queries(1000, 0.05, 10000, 1, 1); select * from test_queries(1000, 0.1, 10000, 1, 1); select * from test_queries(1000, 0.2, 10000, 1, 1); select * from test_queries(1000, 0.3, 10000, 1, 1); select * from test_queries(1000, 0.4, 10000, 1, 1); select * from test_queries(1000, 0.5, 10000, 1, 1); select * from test_queries(1000, 0.01, 1000, 10, 20); select * from test_queries(1000, 0.05, 1000, 10, 20); select * from test_queries(1000, 0.1, 1000, 10, 20); select * from test_queries(1000, 0.2, 1000, 10, 20); select * from test_queries(1000, 0.3, 1000, 10, 20); select * from test_queries(1000, 0.4, 1000, 10, 20); select * from test_queries(1000, 0.5, 1000, 10, 20); select * from test_queries(1000, 0.01, 10000, 10, 20); select * from test_queries(1000, 0.05, 10000, 10, 20); select * from test_queries(1000, 0.1, 10000, 10, 20); select * from test_queries(1000, 0.2, 10000, 10, 20); select * from test_queries(1000, 0.3, 10000, 10, 20); select * from test_queries(1000, 0.4, 10000, 10, 20); select * from test_queries(1000, 0.5, 10000, 10, 20); tdigest-1.4.1/scripts/bechmark.sql000066400000000000000000000121051450426374500171550ustar00rootroot00000000000000drop table if exists t; create table t (v double precision); drop table if exists datasets; create table datasets (ds_name text, ds_sql text); insert into datasets values ('uniform', 'with d as (select pow(random(), 1) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(2)', 'with d as (select pow(random(), 2) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(4)', 'with d as (select pow(random(), 4) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(0.5)', 'with d as (select pow(random(), 0.5) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('pow(0.25)', 'with d as (select pow(random(), 0.25) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(2)', 'with d as (select 1.0 - pow(random(), 2) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(4)', 'with d as (select 1.0 - pow(random(), 4) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - 
pow(0.5)', 'with d as (select 1.0 - pow(random(), 0.5) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); insert into datasets values ('1 - pow(0.25)', 'with d as (select 1.0 - pow(random(), 0.25) as v from generate_series(1,%s)) insert into t select v from (select v, generate_series(1, %s + (%s * random())::int) from d) foo'); create or replace function query_timing(query text, loops int = 10, out avg_time double precision, out stdev_time double precision) returns record language plpgsql as $$ declare timings double precision[] := NULL; i int; start_ts timestamptz; end_ts timestamptz; delta_ts double precision; total_ts double precision; r record; begin total_ts := 0; for i in 1..loops loop start_ts := clock_timestamp(); execute $1; end_ts := clock_timestamp(); delta_ts := 1000 * (extract(epoch from end_ts) - extract(epoch from start_ts)); timings := array_append(timings, delta_ts); total_ts := total_ts + delta_ts; end loop; avg_time := (total_ts / loops); stdev_time := 0.0; for r in select unnest(timings) as t loop stdev_time := stdev_time + pow(r.t - avg_time,2); end loop; stdev_time := sqrt(stdev_time / loops); avg_time := round(avg_time::numeric, 3); return; end; $$; create or replace function test_queries(nvalues int, minvalues int, maxvalues int, out dataset text, out simple_random double precision, out simple_asc double precision, out simple_desc double precision, out preagg_random double precision, out preagg_asc double precision, out preagg_desc double precision) returns setof record language plpgsql as $$ declare d record; begin raise notice 'values % min % max %', nvalues, minvalues, maxvalues; for d in (select * from datasets order by ds_name) loop -- rebuild the table execute 'truncate t'; execute format(d.ds_sql, nvalues, minvalues, (maxvalues - minvalues)); execute 'analyze t'; dataset := d.ds_name; select q.avg_time into simple_random from query_timing('select tdigest(v, 100) from (select * from t order by random()) d') q; select q.avg_time into simple_asc from query_timing('select tdigest(v, 100) from (select * from t order by v) d') q; select q.avg_time into simple_desc from query_timing('select tdigest(v, 100) from (select * from t order by v desc) d') q; select q.avg_time into preagg_random from query_timing('select tdigest(v, c, 100) from (select v, count(*) as c from t group by v order by random()) d') q; select q.avg_time into preagg_asc from query_timing('select tdigest(v, c, 100) from (select v, count(*) as c from t group by v order by v) d') q; select q.avg_time into preagg_desc from query_timing('select tdigest(v, c, 100) from (select v, count(*) as c from t group by v order by v desc) d') q; return next; end loop; return; end; $$; select * from test_queries(1000, 1, 1); select * from test_queries(10000, 1, 1); select * from test_queries(100000, 1, 1); select * from test_queries(1000, 5, 10); select * from test_queries(10000, 5, 10); select * from test_queries(100000, 5, 10); select * from test_queries(1000, 20, 40); select * from test_queries(10000, 20, 40); select * from test_queries(100000, 20, 40); tdigest-1.4.1/tdigest--1.0.0--1.0.1.sql000066400000000000000000000000001450426374500165630ustar00rootroot00000000000000tdigest-1.4.1/tdigest--1.0.0.sql000066400000000000000000000162721450426374500160750ustar00rootroot00000000000000/* tdigest for the double precision */ CREATE OR REPLACE FUNCTION tdigest_add_double(p_pointer internal, p_element double precision, p_compression int) 
RETURNS internal AS 'tdigest', 'tdigest_add_double' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double(p_pointer internal, p_element double precision, p_compression int, p_quantile double precision) RETURNS internal AS 'tdigest', 'tdigest_add_double' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_array(p_pointer internal, p_element double precision, p_compression int, p_quantile double precision[]) RETURNS internal AS 'tdigest', 'tdigest_add_double_array' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_values(p_pointer internal, p_element double precision, p_compression int, p_value double precision) RETURNS internal AS 'tdigest', 'tdigest_add_double_values' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_array_values(p_pointer internal, p_element double precision, p_compression int, p_value double precision[]) RETURNS internal AS 'tdigest', 'tdigest_add_double_array_values' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_percentiles(p_pointer internal) RETURNS double precision AS 'tdigest', 'tdigest_percentiles' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_array_percentiles(p_pointer internal) RETURNS double precision[] AS 'tdigest', 'tdigest_array_percentiles' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_percentiles_of(p_pointer internal) RETURNS double precision AS 'tdigest', 'tdigest_percentiles_of' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_array_percentiles_of(p_pointer internal) RETURNS double precision[] AS 'tdigest', 'tdigest_array_percentiles_of' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_combine(a internal, b internal) RETURNS internal AS 'tdigest', 'tdigest_combine' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_serial(a internal) RETURNS bytea AS 'tdigest', 'tdigest_serial' LANGUAGE C IMMUTABLE STRICT; CREATE OR REPLACE FUNCTION tdigest_deserial(a bytea, b internal) RETURNS internal AS 'tdigest', 'tdigest_deserial' LANGUAGE C IMMUTABLE STRICT; CREATE AGGREGATE tdigest_percentile(double precision, int, double precision) ( SFUNC = tdigest_add_double, STYPE = internal, FINALFUNC = tdigest_percentiles, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile(double precision, int, double precision[]) ( SFUNC = tdigest_add_double_array, STYPE = internal, FINALFUNC = tdigest_array_percentiles, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile_of(double precision, int, double precision) ( SFUNC = tdigest_add_double_values, STYPE = internal, FINALFUNC = tdigest_percentiles_of, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile_of(double precision, int, double precision[]) ( SFUNC = tdigest_add_double_array_values, STYPE = internal, FINALFUNC = tdigest_array_percentiles_of, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE TYPE tdigest; CREATE OR REPLACE FUNCTION tdigest_in(cstring) RETURNS tdigest AS 'tdigest', 'tdigest_in' LANGUAGE C IMMUTABLE STRICT; CREATE OR REPLACE FUNCTION tdigest_out(tdigest) RETURNS cstring AS 'tdigest', 'tdigest_out' LANGUAGE C IMMUTABLE STRICT; CREATE OR REPLACE FUNCTION tdigest_send(tdigest) RETURNS bytea AS 'tdigest', 'tdigest_send' LANGUAGE C IMMUTABLE STRICT; 
CREATE OR REPLACE FUNCTION tdigest_recv(internal) RETURNS tdigest AS 'tdigest', 'tdigest_recv' LANGUAGE C IMMUTABLE STRICT; CREATE TYPE tdigest ( INPUT = tdigest_in, OUTPUT = tdigest_out, RECEIVE = tdigest_recv, SEND = tdigest_send, INTERNALLENGTH = variable, STORAGE = external ); CREATE OR REPLACE FUNCTION tdigest_digest(p_pointer internal) RETURNS tdigest AS 'tdigest', 'tdigest_digest' LANGUAGE C IMMUTABLE; CREATE AGGREGATE tdigest(double precision, int) ( SFUNC = tdigest_add_double, STYPE = internal, FINALFUNC = tdigest_digest, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE OR REPLACE FUNCTION tdigest_add_digest(p_pointer internal, p_element tdigest) RETURNS internal AS 'tdigest', 'tdigest_add_digest' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_digest(p_pointer internal, p_element tdigest, p_quantile double precision) RETURNS internal AS 'tdigest', 'tdigest_add_digest' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_digest_array(p_pointer internal, p_element tdigest, p_quantile double precision[]) RETURNS internal AS 'tdigest', 'tdigest_add_digest_array' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_digest_values(p_pointer internal, p_element tdigest, p_value double precision) RETURNS internal AS 'tdigest', 'tdigest_add_digest_values' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_digest_array_values(p_pointer internal, p_element tdigest, p_value double precision[]) RETURNS internal AS 'tdigest', 'tdigest_add_digest_array_values' LANGUAGE C IMMUTABLE; CREATE AGGREGATE tdigest_percentile(tdigest, double precision) ( SFUNC = tdigest_add_digest, STYPE = internal, FINALFUNC = tdigest_percentiles, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile(tdigest, double precision[]) ( SFUNC = tdigest_add_digest_array, STYPE = internal, FINALFUNC = tdigest_array_percentiles, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile_of(tdigest, double precision) ( SFUNC = tdigest_add_digest_values, STYPE = internal, FINALFUNC = tdigest_percentiles_of, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile_of(tdigest, double precision[]) ( SFUNC = tdigest_add_digest_array_values, STYPE = internal, FINALFUNC = tdigest_array_percentiles_of, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest(tdigest) ( SFUNC = tdigest_add_digest, STYPE = internal, FINALFUNC = tdigest_digest, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE OR REPLACE FUNCTION tdigest_count(tdigest) RETURNS bigint AS 'tdigest', 'tdigest_count' LANGUAGE C IMMUTABLE STRICT; tdigest-1.4.1/tdigest--1.0.1--1.2.0.sql000066400000000000000000000067471450426374500166150ustar00rootroot00000000000000CREATE OR REPLACE FUNCTION tdigest_add_double_count(p_pointer internal, p_element double precision, p_count bigint, p_compression int) RETURNS internal AS 'tdigest', 'tdigest_add_double_count' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_count(p_pointer internal, p_element double precision, p_count bigint, p_compression int, p_quantile double precision) RETURNS internal AS 
'tdigest', 'tdigest_add_double_count' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_array_count(p_pointer internal, p_element double precision, p_count bigint, p_compression int, p_quantile double precision[]) RETURNS internal AS 'tdigest', 'tdigest_add_double_array_count' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_values_count(p_pointer internal, p_element double precision, p_count bigint, p_compression int, p_value double precision) RETURNS internal AS 'tdigest', 'tdigest_add_double_values_count' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_array_values_count(p_pointer internal, p_element double precision, p_count bigint, p_compression int, p_value double precision[]) RETURNS internal AS 'tdigest', 'tdigest_add_double_array_values_count' LANGUAGE C IMMUTABLE; CREATE AGGREGATE tdigest(double precision, bigint, int) ( SFUNC = tdigest_add_double_count, STYPE = internal, FINALFUNC = tdigest_digest, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile(double precision, bigint, int, double precision) ( SFUNC = tdigest_add_double_count, STYPE = internal, FINALFUNC = tdigest_percentiles, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile(double precision, bigint, int, double precision[]) ( SFUNC = tdigest_add_double_array_count, STYPE = internal, FINALFUNC = tdigest_array_percentiles, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile_of(double precision, bigint, int, double precision) ( SFUNC = tdigest_add_double_values_count, STYPE = internal, FINALFUNC = tdigest_percentiles_of, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_percentile_of(double precision, bigint, int, double precision[]) ( SFUNC = tdigest_add_double_array_values_count, STYPE = internal, FINALFUNC = tdigest_array_percentiles_of, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE OR REPLACE FUNCTION tdigest_add(p_digest tdigest, p_element double precision, p_compression int = NULL, p_compact bool = true) RETURNS tdigest AS 'tdigest', 'tdigest_add_double_increment' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add(p_digest tdigest, p_elements double precision[], p_compression int = NULL, p_compact bool = true) RETURNS tdigest AS 'tdigest', 'tdigest_add_double_array_increment' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_union(p_digest1 tdigest, p_digest2 tdigest, p_compact bool = true) RETURNS tdigest AS 'tdigest', 'tdigest_union_double_increment' LANGUAGE C IMMUTABLE; tdigest-1.4.1/tdigest--1.2.0--1.3.0.sql000066400000000000000000000077231450426374500166120ustar00rootroot00000000000000CREATE FUNCTION tdigest_json(tdigest) RETURNS json AS 'tdigest', 'tdigest_to_json' LANGUAGE C IMMUTABLE STRICT; CREATE CAST (tdigest AS json) WITH FUNCTION tdigest_json(tdigest) AS ASSIGNMENT; CREATE FUNCTION tdigest_double_array(tdigest) RETURNS double precision[] AS 'tdigest', 'tdigest_to_array' LANGUAGE C IMMUTABLE STRICT; CREATE CAST (tdigest AS double precision[]) WITH FUNCTION tdigest_double_array(tdigest) AS ASSIGNMENT; -- trimmed aggregates CREATE OR REPLACE FUNCTION tdigest_add_double_trimmed(p_pointer 
internal, p_element double precision, p_compression int, p_low double precision, p_high double precision) RETURNS internal AS 'tdigest', 'tdigest_add_double_trimmed' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_double_count_trimmed(p_pointer internal, p_element double precision, p_count bigint, p_compression int, p_low double precision, p_high double precision) RETURNS internal AS 'tdigest', 'tdigest_add_double_count_trimmed' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_add_digest_trimmed(p_pointer internal, p_element tdigest, p_low double precision, p_high double precision) RETURNS internal AS 'tdigest', 'tdigest_add_digest_trimmed' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_trimmed_avg(p_pointer internal) RETURNS double precision AS 'tdigest', 'tdigest_trimmed_avg' LANGUAGE C IMMUTABLE; CREATE OR REPLACE FUNCTION tdigest_trimmed_sum(p_pointer internal) RETURNS double precision AS 'tdigest', 'tdigest_trimmed_sum' LANGUAGE C IMMUTABLE; CREATE AGGREGATE tdigest_avg(double precision, int, double precision, double precision) ( SFUNC = tdigest_add_double_trimmed, STYPE = internal, FINALFUNC = tdigest_trimmed_avg, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_avg(double precision, bigint, int, double precision, double precision) ( SFUNC = tdigest_add_double_count_trimmed, STYPE = internal, FINALFUNC = tdigest_trimmed_avg, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_avg(tdigest, double precision, double precision) ( SFUNC = tdigest_add_digest_trimmed, STYPE = internal, FINALFUNC = tdigest_trimmed_avg, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_sum(double precision, int, double precision, double precision) ( SFUNC = tdigest_add_double_trimmed, STYPE = internal, FINALFUNC = tdigest_trimmed_sum, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_sum(double precision, bigint, int, double precision, double precision) ( SFUNC = tdigest_add_double_count_trimmed, STYPE = internal, FINALFUNC = tdigest_trimmed_sum, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); CREATE AGGREGATE tdigest_sum(tdigest, double precision, double precision) ( SFUNC = tdigest_add_digest_trimmed, STYPE = internal, FINALFUNC = tdigest_trimmed_sum, SERIALFUNC = tdigest_serial, DESERIALFUNC = tdigest_deserial, COMBINEFUNC = tdigest_combine, PARALLEL = SAFE ); -- non-aggregate functions to extract trimmed sum/avg from a tdigest CREATE OR REPLACE FUNCTION tdigest_digest_sum(p_digest tdigest, p_low double precision = 0.0, p_high double precision = 1.0) RETURNS double precision AS 'tdigest', 'tdigest_digest_sum' LANGUAGE C IMMUTABLE STRICT; CREATE OR REPLACE FUNCTION tdigest_digest_avg(p_digest tdigest, p_low double precision = 0.0, p_high double precision = 1.0) RETURNS double precision AS 'tdigest', 'tdigest_digest_avg' LANGUAGE C IMMUTABLE STRICT; 
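-- Usage sketch for the two functions above (illustration only): compute the
-- trimmed sum/average directly from a stored digest, discarding the bottom 5%
-- and top 5% of the distribution:
--
--   SELECT tdigest_digest_sum(d, 0.05, 0.95),
--          tdigest_digest_avg(d, 0.05, 0.95)
--     FROM (SELECT tdigest(v, 100) AS d FROM generate_series(1,10000) s(v)) foo;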
tdigest-1.4.1/tdigest--1.3.0--1.4.0.sql000066400000000000000000000000001450426374500165710ustar00rootroot00000000000000tdigest-1.4.1/tdigest--1.4.0--1.4.1.sql000066400000000000000000000000001450426374500165730ustar00rootroot00000000000000tdigest-1.4.1/tdigest.c000066400000000000000000002546301450426374500150110ustar00rootroot00000000000000/* * tdigest.c - implementation of t-digest for PostgreSQL, useful for estimation * of quantiles, percentiles, trimmed means, and various similar metrics. * * Copyright (C) Tomas Vondra, 2019 */ #include <stdio.h> #include <math.h> #include <string.h> #include <sys/time.h> #include <unistd.h> #include <limits.h> #include "postgres.h" #include "libpq/pqformat.h" #include "utils/array.h" #include "utils/builtins.h" #include "utils/lsyscache.h" #include "catalog/pg_type.h" PG_MODULE_MAGIC; /* * A centroid, used both for in-memory and on-disk storage. */ typedef struct centroid_t { double mean; int64 count; } centroid_t; /* * On-disk representation of the t-digest. */ typedef struct tdigest_t { int32 vl_len_; /* varlena header (do not touch directly!) */ int32 flags; /* reserved for future use (versioning, ...) */ int64 count; /* number of items added to the t-digest */ int compression; /* compression used to build the digest */ int ncentroids; /* number of centroids in the array */ centroid_t centroids[FLEXIBLE_ARRAY_MEMBER]; } tdigest_t; /* * Centroids used to store (sum,count), but we want to store (mean,count) * because that allows us to prevent rounding errors e.g. when merging * centroids with the same mean, or adding the same value to the centroid. * * To handle existing tdigest data in a backwards-compatible way, we have * a flag marking the new ones with mean, and we convert the old values. */ #define TDIGEST_STORES_MEAN 0x0001 /* * An aggregate state, representing the t-digest and some additional info * (requested percentiles, ...). * * When adding new values to the t-digest, we add them as centroids into a * separate "uncompacted" part of the array. While centroids need more space * than plain points (24B vs. 8B), making the aggregate state quite a bit * larger, it does simplify the code quite a bit as it only needs to deal * with a single struct type instead of two (centroids + points). But maybe * we should separate those two things in the future. * * XXX We only ever use one of values/percentiles, never both at the same * time. In the future the values may use a different data type than double * (e.g. numeric), so we keep both fields. */ typedef struct tdigest_aggstate_t { /* basic t-digest fields (centroids at the end) */ int64 count; /* number of samples in the digest */ int ncompactions; /* number of merges/compactions */ int compression; /* compression algorithm */ int ncentroids; /* number of centroids */ int ncompacted; /* compacted part */ /* array of requested percentiles and values */ int npercentiles; /* number of percentiles */ int nvalues; /* number of values */ double trim_low; /* low threshold (for trimmed aggs) */ double trim_high; /* high threshold (for trimmed aggs) */ double *percentiles; /* array of percentiles (if any) */ double *values; /* array of values (if any) */ centroid_t *centroids; /* centroids for the digest */ } tdigest_aggstate_t; static int centroid_cmp(const void *a, const void *b); #define PG_GETARG_TDIGEST(x) (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(x)) /* * Size of buffer for incoming data, as a multiple of the compression value.
* Quoting from the t-digest paper: * * The constant of proportionality should be determined by experiment, but * micro-benchmarks indicate that C2/C1 is in the range from 5 to 20 for * a single core of an Intel i7 processor. In these micro-benchmarks, * increasing the buffer size to (10 * delta) dramatically improves the * average speed but further buffer size increases have much less effect. * * XXX Maybe make the coefficient user-defined, with some reasonable limits * (say 2 - 20), so that users can pick the right trade-off between speed * and memory usage. */ #define BUFFER_SIZE(compression) (10 * (compression)) #define AssertBounds(index, length) Assert((index) >= 0 && (index) < (length)) #define MIN_COMPRESSION 10 #define MAX_COMPRESSION 10000 /* prototypes */ PG_FUNCTION_INFO_V1(tdigest_add_double_array); PG_FUNCTION_INFO_V1(tdigest_add_double_array_count); PG_FUNCTION_INFO_V1(tdigest_add_double_array_values); PG_FUNCTION_INFO_V1(tdigest_add_double_array_values_count); PG_FUNCTION_INFO_V1(tdigest_add_double); PG_FUNCTION_INFO_V1(tdigest_add_double_count); PG_FUNCTION_INFO_V1(tdigest_add_double_values); PG_FUNCTION_INFO_V1(tdigest_add_double_values_count); PG_FUNCTION_INFO_V1(tdigest_add_digest_array); PG_FUNCTION_INFO_V1(tdigest_add_digest_array_values); PG_FUNCTION_INFO_V1(tdigest_add_digest); PG_FUNCTION_INFO_V1(tdigest_add_digest_values); PG_FUNCTION_INFO_V1(tdigest_array_percentiles); PG_FUNCTION_INFO_V1(tdigest_array_percentiles_of); PG_FUNCTION_INFO_V1(tdigest_percentiles); PG_FUNCTION_INFO_V1(tdigest_percentiles_of); PG_FUNCTION_INFO_V1(tdigest_digest); PG_FUNCTION_INFO_V1(tdigest_serial); PG_FUNCTION_INFO_V1(tdigest_deserial); PG_FUNCTION_INFO_V1(tdigest_combine); PG_FUNCTION_INFO_V1(tdigest_in); PG_FUNCTION_INFO_V1(tdigest_out); PG_FUNCTION_INFO_V1(tdigest_send); PG_FUNCTION_INFO_V1(tdigest_recv); PG_FUNCTION_INFO_V1(tdigest_count); PG_FUNCTION_INFO_V1(tdigest_to_json); PG_FUNCTION_INFO_V1(tdigest_to_array); PG_FUNCTION_INFO_V1(tdigest_add_double_increment); PG_FUNCTION_INFO_V1(tdigest_add_double_array_increment); PG_FUNCTION_INFO_V1(tdigest_union_double_increment); PG_FUNCTION_INFO_V1(tdigest_add_double_trimmed); PG_FUNCTION_INFO_V1(tdigest_add_double_count_trimmed); PG_FUNCTION_INFO_V1(tdigest_add_digest_trimmed); PG_FUNCTION_INFO_V1(tdigest_add_digest_count_trimmed); PG_FUNCTION_INFO_V1(tdigest_trimmed_avg); PG_FUNCTION_INFO_V1(tdigest_trimmed_sum); PG_FUNCTION_INFO_V1(tdigest_digest_sum); PG_FUNCTION_INFO_V1(tdigest_digest_avg); Datum tdigest_add_double_array(PG_FUNCTION_ARGS); Datum tdigest_add_double_array_count(PG_FUNCTION_ARGS); Datum tdigest_add_double_array_values(PG_FUNCTION_ARGS); Datum tdigest_add_double_array_values_count(PG_FUNCTION_ARGS); Datum tdigest_add_double(PG_FUNCTION_ARGS); Datum tdigest_add_double_count(PG_FUNCTION_ARGS); Datum tdigest_add_double_values(PG_FUNCTION_ARGS); Datum tdigest_add_double_values_count(PG_FUNCTION_ARGS); Datum tdigest_add_digest_array(PG_FUNCTION_ARGS); Datum tdigest_add_digest_array_values(PG_FUNCTION_ARGS); Datum tdigest_add_digest(PG_FUNCTION_ARGS); Datum tdigest_add_digest_values(PG_FUNCTION_ARGS); Datum tdigest_array_percentiles(PG_FUNCTION_ARGS); Datum tdigest_array_percentiles_of(PG_FUNCTION_ARGS); Datum tdigest_percentiles(PG_FUNCTION_ARGS); Datum tdigest_percentiles_of(PG_FUNCTION_ARGS); Datum tdigest_digest(PG_FUNCTION_ARGS); Datum tdigest_serial(PG_FUNCTION_ARGS); Datum tdigest_deserial(PG_FUNCTION_ARGS); Datum tdigest_combine(PG_FUNCTION_ARGS); Datum tdigest_in(PG_FUNCTION_ARGS); Datum 
tdigest_out(PG_FUNCTION_ARGS); Datum tdigest_send(PG_FUNCTION_ARGS); Datum tdigest_recv(PG_FUNCTION_ARGS); Datum tdigest_count(PG_FUNCTION_ARGS); Datum tdigest_add_double_increment(PG_FUNCTION_ARGS); Datum tdigest_add_double_array_increment(PG_FUNCTION_ARGS); Datum tdigest_union_double_increment(PG_FUNCTION_ARGS); Datum tdigest_to_json(PG_FUNCTION_ARGS); Datum tdigest_to_array(PG_FUNCTION_ARGS); Datum tdigest_add_double_trimmed(PG_FUNCTION_ARGS); Datum tdigest_add_double_count_trimmed(PG_FUNCTION_ARGS); Datum tdigest_add_digest_trimmed(PG_FUNCTION_ARGS); Datum tdigest_add_digest_count_trimmed(PG_FUNCTION_ARGS); Datum tdigest_trimmed_avg(PG_FUNCTION_ARGS); Datum tdigest_trimmed_sum(PG_FUNCTION_ARGS); Datum tdigest_digest_sum(PG_FUNCTION_ARGS); Datum tdigest_digest_avg(PG_FUNCTION_ARGS); static Datum double_to_array(FunctionCallInfo fcinfo, double * d, int len); static double *array_to_double(FunctionCallInfo fcinfo, ArrayType *v, int * len); /* basic checks on the t-digest (proper sum of counts, ...) */ static void AssertCheckTDigest(tdigest_t *digest) { #ifdef USE_ASSERT_CHECKING int i; int64 cnt; Assert(digest->flags == 0 || digest->flags == TDIGEST_STORES_MEAN); Assert((digest->compression >= MIN_COMPRESSION) && (digest->compression <= MAX_COMPRESSION)); Assert(digest->ncentroids >= 0); Assert(digest->ncentroids <= BUFFER_SIZE(digest->compression)); cnt = 0; for (i = 0; i < digest->ncentroids; i++) { Assert(digest->centroids[i].count > 0); cnt += digest->centroids[i].count; /* FIXME also check this does work with the scale function */ } Assert(VARSIZE_ANY(digest) == offsetof(tdigest_t, centroids) + digest->ncentroids * sizeof(centroid_t)); Assert(digest->count == cnt); #endif } static void AssertCheckTDigestAggState(tdigest_aggstate_t *state) { #ifdef USE_ASSERT_CHECKING int i; int64 cnt; Assert(state->npercentiles >= 0); Assert(((state->npercentiles == 0) && (state->percentiles == NULL)) || ((state->npercentiles > 0) && (state->percentiles != NULL))); for (i = 0; i < state->npercentiles; i++) Assert((state->percentiles[i] >= 0.0) && (state->percentiles[i] <= 1.0)); Assert((state->compression >= MIN_COMPRESSION) && (state->compression <= MAX_COMPRESSION)); Assert(state->ncentroids >= 0); Assert(state->ncentroids <= BUFFER_SIZE(state->compression)); cnt = 0; for (i = 0; i < state->ncentroids; i++) { Assert(state->centroids[i].count > 0); cnt += state->centroids[i].count; /* XXX maybe check this does work with the scale function */ } Assert(state->count == cnt); #endif } static void reverse_centroids(centroid_t *centroids, int ncentroids) { int start = 0, end = (ncentroids - 1); while (start < end) { centroid_t tmp = centroids[start]; centroids[start] = centroids[end]; centroids[end] = tmp; start++; end--; } } static void rebalance_centroids(centroid_t *centroids, int ncentroids, int64 weight_before, int64 weight_after) { double ratio = weight_before / (double) weight_after; int64 count_before = 0; int64 count_after = 0; int start = 0; int end = (ncentroids - 1); int i; centroid_t *scratch = palloc(sizeof(centroid_t) * ncentroids); i = 0; while (i < ncentroids) { while (i < ncentroids) { scratch[start] = centroids[i]; count_before += centroids[i].count; i++; start++; if (count_before > count_after * ratio) break; } while (i < ncentroids) { scratch[end] = centroids[i]; count_after += centroids[i].count; i++; end--; if (count_before < count_after * ratio) break; } } memcpy(centroids, scratch, sizeof(centroid_t) * ncentroids); pfree(scratch); } /* * Sort centroids in the digest. 
* * We have to sort the whole array, because we don't just simply sort the * centroids - we do the rebalancing of items with the same mean too. */ static void tdigest_sort(tdigest_aggstate_t *state) { int i; int64 count_so_far; int64 next_group; int64 median_count; /* do qsort on the non-sorted part */ pg_qsort(state->centroids, state->ncentroids, sizeof(centroid_t), centroid_cmp); /* * The centroids are sorted by (mean,count). That's fine for centroids up * to median, but above median this ordering is incorrect for centroids * with the same mean (or for groups crossing the median boundary). To fix * this we 'rebalance' those groups. Those entirely above median can be * simply sorted in the opposite order, while those crossing the median * need to be rebalanced depending on what part is below/above median. */ count_so_far = 0; next_group = 0; /* includes count_so_far */ median_count = (state->count / 2); /* * Split the centroids into groups with the same mean, process each group * depending on whether it falls before/after median. */ i = 0; while (i < state->ncentroids) { int j = i; int group_size = 0; /* determine the end of the group */ while ((j < state->ncentroids) && (state->centroids[i].mean == state->centroids[j].mean)) { next_group += state->centroids[j].count; group_size++; j++; } /* * We can ignore groups of size 1 (number of centroids, not counts), as * those are trivially sorted. */ if (group_size > 1) { if (count_so_far >= median_count) { /* group fully above median - reverse the order */ reverse_centroids(&state->centroids[i], group_size); } else if (next_group >= median_count) /* group split by median */ { rebalance_centroids(&state->centroids[i], group_size, median_count - count_so_far, next_group - median_count); } } i = j; count_so_far = next_group; } } /* * Perform compaction of the t-digest, i.e. merge the centroids as required * by the compression parameter. * * We always keep the data sorted in ascending order. This way we can reuse * the sort between compactions, and also when computing the quantiles. * * XXX Switch the direction regularly, to eliminate possible bias and improve * accuracy, as mentioned in the paper. * * XXX This initially used the k1 scale function, but the implementation was * not limiting the number of centroids for some reason (it might have been * a bug in the implementation, of course). The current code is a modified * copy from ajwerner [1], and AFAIK it's the k2 function, it's much simpler * and generally works quite nicely. 
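 *
 * Illustrative arithmetic for the merge condition used below (my numbers,
 * not from the paper): with N = 10000 items and compression = 100, the
 * normalizer is 100 / (2 * pi * 10000 * ln(10000)) ~= 1.7e-4. A centroid
 * at q = 0.5 may then grow to q*(1-q) / normalizer ~= 1450 items, while
 * one at q = 0.01 is capped at roughly 57 - tails stay small and accurate.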
 *
 * [1] https://github.com/ajwerner/tdigestc/blob/master/go/tdigest.c
 */
static void
tdigest_compact(tdigest_aggstate_t *state)
{
	int			i;
	int			cur;	/* current centroid */
	int64		count_so_far;
	int64		total_count;
	double		denom;
	double		normalizer;
	int			start;
	int			step;
	int			n;

	AssertCheckTDigestAggState(state);

	/* if all centroids are already compacted, there's nothing to do */
	if (state->ncompacted == state->ncentroids)
		return;

	tdigest_sort(state);

	state->ncompactions++;

	if (state->ncompactions % 2 == 0)
	{
		start = 0;
		step = 1;
	}
	else
	{
		start = state->ncentroids - 1;
		step = -1;
	}

	total_count = state->count;
	denom = 2 * M_PI * total_count * log(total_count);
	normalizer = state->compression / denom;

	cur = start;
	count_so_far = 0;
	n = 1;

	for (i = start + step; (i >= 0) && (i < state->ncentroids); i += step)
	{
		int64		proposed_count;
		double		q0;
		double		q2;
		double		z;
		bool		should_add;

		proposed_count = state->centroids[cur].count + state->centroids[i].count;

		z = proposed_count * normalizer;
		q0 = count_so_far / (double) total_count;
		q2 = (count_so_far + proposed_count) / (double) total_count;

		should_add = (z <= (q0 * (1 - q0))) && (z <= (q2 * (1 - q2)));

		if (should_add)
		{
			/*
			 * If both centroids have the same mean, don't calculate it again.
			 * The recalculation may cause rounding errors, so that the means
			 * would drift apart over time. We want to keep them equal for as
			 * long as possible.
			 */
			if (state->centroids[cur].mean != state->centroids[i].mean)
			{
				double		sum;
				int64		count;

				sum = state->centroids[i].count * state->centroids[i].mean;
				sum += state->centroids[cur].count * state->centroids[cur].mean;

				count = state->centroids[i].count;
				count += state->centroids[cur].count;

				state->centroids[cur].mean = (sum / count);
			}

			/* XXX Do this after possibly recalculating the mean. */
			state->centroids[cur].count += state->centroids[i].count;
		}
		else
		{
			count_so_far += state->centroids[cur].count;
			cur += step;
			n++;
			state->centroids[cur] = state->centroids[i];
		}

		if (cur != i)
		{
			state->centroids[i].count = 0;
			state->centroids[i].mean = 0;
		}
	}

	state->ncentroids = n;
	state->ncompacted = state->ncentroids;

	if (step < 0)
		memmove(state->centroids, &state->centroids[cur], n * sizeof(centroid_t));

	AssertCheckTDigestAggState(state);

	Assert(state->ncentroids < BUFFER_SIZE(state->compression));
}

/*
 * Estimate requested quantiles from the t-digest agg state.
 */
static void
tdigest_compute_quantiles(tdigest_aggstate_t *state, double *result)
{
	int			i,
				j;

	AssertCheckTDigestAggState(state);

	/*
	 * Trigger a compaction, which also sorts the data.
	 *
	 * XXX maybe just do a sort here, which should give us a bit more
	 * accurate results, probably.
	 */
	tdigest_compact(state);

	for (i = 0; i < state->npercentiles; i++)
	{
		double		count;
		double		delta;
		double		goal = (state->percentiles[i] * state->count);
		bool		on_the_right;
		centroid_t *prev,
				   *next;
		centroid_t *c = NULL;
		double		slope;

		/* first centroid for percentile 0.0 */
		if (state->percentiles[i] == 0.0)
		{
			c = &state->centroids[0];
			result[i] = c->mean;
			continue;
		}

		/* last centroid for percentile 1.0 */
		if (state->percentiles[i] == 1.0)
		{
			c = &state->centroids[state->ncentroids - 1];
			result[i] = c->mean;
			continue;
		}

		/* walk through the centroids and count number of items */
		count = 0;
		for (j = 0; j < state->ncentroids; j++)
		{
			c = &state->centroids[j];

			/* have we exceeded the expected count? */
			if (count + c->count > goal)
				break;

			/* account for the centroid */
			count += c->count;
		}

		delta = goal - count - (c->count / 2.0);

		/*
		 * double arithmetic, so don't compare to 0.0 directly, it's enough
		 * to be "close enough"
		 */
		if (fabs(delta) < 0.000000001)
		{
			result[i] = c->mean;
			continue;
		}

		on_the_right = (delta > 0.0);

		/*
		 * for extreme percentiles we might end on the right of the last node
		 * or on the left of the first node, instead of interpolating we
		 * return the mean of the node
		 */
		if ((on_the_right && (j+1) >= state->ncentroids) ||
			(!on_the_right && (j-1) < 0))
		{
			result[i] = c->mean;
			continue;
		}

		if (on_the_right)
		{
			prev = &state->centroids[j];
			AssertBounds(j+1, state->ncentroids);
			next = &state->centroids[j+1];
			count += (prev->count / 2.0);
		}
		else
		{
			AssertBounds(j-1, state->ncentroids);
			prev = &state->centroids[j-1];
			next = &state->centroids[j];
			count -= (prev->count / 2.0);
		}

		slope = (next->mean - prev->mean) / (next->count / 2.0 + prev->count / 2.0);

		result[i] = prev->mean + slope * (goal - count);
	}
}

/*
 * Estimate inverse of quantile given a value from the t-digest agg state.
 *
 * Essentially an inverse to tdigest_compute_quantiles.
 */
static void
tdigest_compute_quantiles_of(tdigest_aggstate_t *state, double *result)
{
	int			i;

	AssertCheckTDigestAggState(state);

	/*
	 * Trigger a compaction, which also sorts the data.
	 *
	 * XXX maybe just do a sort here, which should give us a bit more
	 * accurate results, probably.
	 */
	tdigest_compact(state);

	for (i = 0; i < state->nvalues; i++)
	{
		int			j;
		double		count;
		centroid_t *c = NULL;
		centroid_t *prev;
		double		value = state->values[i];
		double		m,
					x;

		count = 0;
		for (j = 0; j < state->ncentroids; j++)
		{
			c = &state->centroids[j];

			if (c->mean >= value)
				break;

			count += c->count;
		}

		/* the value exactly matches the mean */
		if (value == c->mean)
		{
			int64		count_at_value = 0;

			/*
			 * There may be multiple centroids with this mean (i.e. containing
			 * this value), so find all of them and sum their weights. Check
			 * the array bounds before inspecting the mean.
			 */
			while (j < state->ncentroids && state->centroids[j].mean == value)
			{
				count_at_value += state->centroids[j].count;
				j++;
			}

			result[i] = (count + (count_at_value / 2.0)) / state->count;
			continue;
		}
		else if (value > c->mean)	/* past the largest */
		{
			result[i] = 1;
			continue;
		}
		else if (j == 0)			/* past the smallest */
		{
			result[i] = 0;
			continue;
		}

		/*
		 * The value lies somewhere between two centroids. We want to figure
		 * out where along the line from the prev node to this node the value
		 * is.
		 *
		 * FIXME What if there are multiple centroids with the same mean as
		 * the prev/curr centroid? This probably needs to lookup all of them
		 * and sum their counts, just like we did in case of the exact match,
		 * no?
		 */
		prev = c - 1;
		count -= (prev->count / 2);

		/*
		 * We assume for both prev/curr centroids that half the count is on
		 * the left/right, so between them we have
		 * (prev->count/2 + curr->count/2). At zero we are in prev->mean and
		 * at (prev->count/2 + curr->count/2) we're at curr->mean.
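		 *
		 * Worked example (illustrative numbers, not from the code): with
		 * prev = (mean 10, count 4) and curr = (mean 20, count 6) we get
		 * m = (20 - 10) / (6/2 + 4/2) = 2.0 per item, so value = 14.0 gives
		 * x = (14 - 10) / 2.0 = 2, contributing (count + 2) / state->count.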
*/ m = (c->mean - prev->mean) / (c->count / 2.0 + prev->count / 2.0); x = (value - prev->mean) / m; result[i] = (double) (count + x) / state->count; } } /* add a value to the t-digest, trigger a compaction if full */ static void tdigest_add(tdigest_aggstate_t *state, double v) { int compression = state->compression; int ncentroids = state->ncentroids; AssertCheckTDigestAggState(state); /* make sure we have space for the value */ Assert(state->ncentroids < BUFFER_SIZE(compression)); /* for a single point, the value is both sum and mean */ state->centroids[ncentroids].count = 1; state->centroids[ncentroids].mean = v; state->ncentroids++; state->count++; Assert(state->ncentroids <= BUFFER_SIZE(compression)); /* if the buffer got full, trigger compaction here so that next * insert has free space */ if (state->ncentroids == BUFFER_SIZE(compression)) tdigest_compact(state); } /* * Add a centroid (possibly with count not equal to 1) to the t-digest, * triggers a compaction when buffer full. */ static void tdigest_add_centroid(tdigest_aggstate_t *state, double mean, int64 count) { int compression = state->compression; int ncentroids = state->ncentroids; AssertCheckTDigestAggState(state); /* make sure we have space for the value */ Assert(state->ncentroids < BUFFER_SIZE(compression)); /* for a single point, the value is both sum and mean */ state->centroids[ncentroids].count = count; state->centroids[ncentroids].mean = mean; state->ncentroids++; state->count += count; Assert(state->ncentroids <= BUFFER_SIZE(compression)); /* if the buffer got full, trigger compaction here so that next * insert has free space */ if (state->ncentroids == BUFFER_SIZE(compression)) tdigest_compact(state); } /* allocate t-digest with enough space for a requested number of centroids */ static tdigest_t * tdigest_allocate(int ncentroids) { Size len; tdigest_t *digest; char *ptr; len = offsetof(tdigest_t, centroids) + ncentroids * sizeof(centroid_t); /* we pre-allocate the array for all centroids and also the buffer for incoming data */ ptr = palloc(len); SET_VARSIZE(ptr, len); digest = (tdigest_t *) ptr; digest->flags = 0; digest->ncentroids = 0; digest->count = 0; digest->compression = 0; /* new tdigest are automatically storing mean */ digest->flags |= TDIGEST_STORES_MEAN; return digest; } /* * tdigest_update_format * Update t-digest format to represent centroids as (mean,count). * * Switches the centroids from (sum,count) to (mean,count), so that all * the places processing centroids can use just the new format. * * If the digest already uses the new format, this is a no-op. Otherwise * a modified copy of the digest is returned. * * XXX This does not affect on-disk representation of existing digests, * we create just an in-memory version of the digest. Only when the * digest gets modified a new format will be written back. */ static tdigest_t * tdigest_update_format(tdigest_t *digest) { int i; int s; char *ptr; /* if already new format, we're done */ if (digest->flags & TDIGEST_STORES_MEAN) return digest; /* * We'll convert the digest so that centroids use means, but we must * not modify the input digest - it might be just a pointer to data * buffer, or something like that. So we have to create a copy first. */ s = VARSIZE_ANY(digest); ptr = palloc(s); memcpy(ptr, digest, s); digest = (tdigest_t *) ptr; /* And now tweak the contents of the copy. 
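	 *
	 * E.g. (illustrative numbers) an old-format centroid stored as
	 * (sum = 30.0, count = 4) comes out of this loop as (mean = 7.5,
	 * count = 4).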
*/ for (i = 0; i < digest->ncentroids; i++) { digest->centroids[i].mean = digest->centroids[i].mean / digest->centroids[i].count; } digest->flags |= TDIGEST_STORES_MEAN; return digest; } /* * allocate a tdigest aggregate state, along with space for percentile(s) * and value(s) requested when calling the aggregate function */ static tdigest_aggstate_t * tdigest_aggstate_allocate(int npercentiles, int nvalues, int compression) { Size len; tdigest_aggstate_t *state; char *ptr; /* at least one of those values is 0 */ Assert(nvalues == 0 || npercentiles == 0); /* * We allocate a single chunk for the struct including percentiles and * centroids (including extra buffer for new data). */ len = MAXALIGN(sizeof(tdigest_aggstate_t)) + MAXALIGN(sizeof(double) * npercentiles) + MAXALIGN(sizeof(double) * nvalues) + (BUFFER_SIZE(compression) * sizeof(centroid_t)); ptr = palloc0(len); state = (tdigest_aggstate_t *) ptr; ptr += MAXALIGN(sizeof(tdigest_aggstate_t)); state->nvalues = nvalues; state->npercentiles = npercentiles; state->compression = compression; if (npercentiles > 0) { state->percentiles = (double *) ptr; ptr += MAXALIGN(sizeof(double) * npercentiles); } if (nvalues > 0) { state->values = (double *) ptr; ptr += MAXALIGN(sizeof(double) * nvalues); } state->centroids = (centroid_t *) ptr; ptr += (BUFFER_SIZE(compression) * sizeof(centroid_t)); Assert(ptr == (char *) state + len); return state; } static tdigest_t * tdigest_aggstate_to_digest(tdigest_aggstate_t *state, bool compact) { int i; tdigest_t *digest; if (compact) tdigest_compact(state); digest = tdigest_allocate(state->ncentroids); digest->count = state->count; digest->ncentroids = state->ncentroids; digest->compression = state->compression; for (i = 0; i < state->ncentroids; i++) { digest->centroids[i].mean = state->centroids[i].mean; digest->centroids[i].count = state->centroids[i].count; } return digest; } /* check that the requested percentiles are valid */ static void check_percentiles(double *percentiles, int npercentiles) { int i; for (i = 0; i < npercentiles; i++) { if ((percentiles[i] < 0.0) || (percentiles[i] > 1.0)) elog(ERROR, "invalid percentile value %f, should be in [0.0, 1.0]", percentiles[i]); } } static void check_compression(int compression) { if (compression < MIN_COMPRESSION || compression > MAX_COMPRESSION) elog(ERROR, "invalid compression value %d", compression); } static void check_trim_values(double low, double high) { if (low < 0.0) elog(ERROR, "invalid low percentile value %f, should be in [0.0, 1.0]", low); if (high > 1.0) elog(ERROR, "invalid high percentile value %f, should be in [0.0, 1.0]", high); if (low >= high) elog(ERROR, "invalid low/high percentile values %f/%f, should be low < high", low, high); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with a single percentile. */ Datum tdigest_add_double(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. 
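	 *
	 * A quick SQL illustration of this behavior (a sketch, using the
	 * tdigest aggregate as documented in the README):
	 *
	 *   SELECT tdigest(v, 100) FROM (VALUES (1.0), (NULL), (3.0)) x(v);
	 *
	 * produces a digest with count = 2, mirroring how percentile_cont
	 * ignores NULL inputs.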
*/ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(2); double *percentiles = NULL; int npercentiles = 0; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); if (PG_NARGS() >= 4) { percentiles = (double *) palloc(sizeof(double)); percentiles[0] = PG_GETARG_FLOAT8(3); npercentiles = 1; check_percentiles(percentiles, npercentiles); } state = tdigest_aggstate_allocate(npercentiles, 0, compression); if (percentiles) { memcpy(state->percentiles, percentiles, sizeof(double) * npercentiles); pfree(percentiles); } MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Generate a t-digest representing a value with a given count. * * This is an alternative to using a single centroid, representing all points * with the same value. It forms a proper t-diget, following all the rules on * centroid sizes, etc. */ static tdigest_t * tdigest_generate(int compression, double value, int64 count) { int64 count_so_far; int64 count_remaining; double denom; double normalizer; int i; tdigest_t *result = tdigest_allocate(compression); denom = 2 * M_PI * count * log(count); normalizer = compression / denom; count_so_far = 0; /* does not include current centroid */ count_remaining = count; /* * Create largest possible centroids, until we run out of items. In each * step we need to find the largest possible well-formed centroid, i.e. one * that matches the two conditions: * * z <= q0 * (1 - q0) where q0 = (count_so_far / count) * * z <= q2 * (1 - q2) where q2 = (count_so_far + X) / count; * * with z = (X * normalizer). X being the value we need to determine. Solving * q0 is trivial, while q2 leads to a quadratic equation with two roots. */ while (count_remaining > 0) { int64 proposed_count; double q0; double a, b, c; double r1, r2; /* solving z <= q0 * (1 - q0) is trivial */ q0 = count_so_far / (double) count; r1 = (q0 * (1 - q0) / normalizer); /* * Solve z <= q2 * (1 - q2) as a quadratic equation. The inequatily we * need to solve is * * 0 <= a * x^2 + b * x + c * * with these coefficients. * * XXX The counts may be very high values (int64), so we need to be * careful to prevent overflows by doing everything with double. */ a = -1; b = ((double) count - 2 * (double) count_so_far - (double) count * (double) count * normalizer); c = ((double) count_so_far * (double) count - (double) count_so_far * (double) count_so_far); /* * As this is an "upside down" parabola, the values between the roots * are positive - we're looking for the largest of the two values. * * XXX Tthe first root should be the higher one, because sqrt is * always positive, so (-b - sqrt()) is smaller and negative, and * we're dividing by negative value. */ r2 = Max((-b - sqrt(b * b - 4 * a * c)) / (2 * a), (-b + sqrt(b * b - 4 * a * c)) / (2 * a)); /* We need to meet both conditions, so use the smaller solution. */ proposed_count = floor(Min(r1, r2)); /* * It's possible to get very low values on the tails, but we must add * at least something, otherwise we'd get infinite loops. 
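	 *
	 * Illustrative check of the bounds (my arithmetic, not from the paper):
	 * for count = 1e6 and compression = 100 the normalizer is about
	 * 100 / (2 * pi * 1e6 * ln(1e6)) ~= 1.15e-6. At count_so_far = 0 both
	 * r1 and r2 come out as 0, so the clamp below emits a single-item
	 * centroid, while near the median the bounds allow roughly 1.9e5 items.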
*/ proposed_count = Max(proposed_count, 1); /* add the centroid and update the added/removed counters */ result->count += proposed_count; result->centroids[result->ncentroids].count = proposed_count; result->centroids[result->ncentroids].mean = value; result->ncentroids++; Assert(result->ncentroids <= compression); count_so_far += proposed_count; count_remaining -= proposed_count; } result->count = 0; for (i = 0; i < result->ncentroids; i++) result->count += result->centroids[i].count; return result; } /* * Add a value with count to the tdigest (create one if needed). Transition * function for tdigest aggregate with a single percentile. */ Datum tdigest_add_double_count(PG_FUNCTION_ARGS) { int64 i; int64 count; tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double_count called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(3); double *percentiles = NULL; int npercentiles = 0; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); if (PG_NARGS() >= 5) { percentiles = (double *) palloc(sizeof(double)); percentiles[0] = PG_GETARG_FLOAT8(4); npercentiles = 1; check_percentiles(percentiles, npercentiles); } state = tdigest_aggstate_allocate(npercentiles, 0, compression); if (percentiles) { memcpy(state->percentiles, percentiles, sizeof(double) * npercentiles); pfree(percentiles); } MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); if (PG_ARGISNULL(2)) { count = 1; } else count = PG_GETARG_INT64(2); /* can't add values with non-positive counts */ if (count <= 0) elog(ERROR, "invalid count value %lld, must be a positive value", (long long) count); /* * When adding too many values (than would fit into an empty buffer, and * thus likely causing too many compactions), we instead build a t-digest * and them merge it into the existing state. * * This is much faster, because the t-digest can be generated in one go, * so there can be only one compaction at most. */ if (count > BUFFER_SIZE(state->compression)) { int i; tdigest_t *new; double value = PG_GETARG_FLOAT8(1); new = tdigest_generate(state->compression, value, count); /* XXX maybe not necessary if there's enough space in the buffer */ tdigest_compact(state); for (i = 0; i < new->ncentroids; i++) { centroid_t *s = &new->centroids[i]; state->centroids[state->ncentroids].count = s->count; state->centroids[state->ncentroids].mean = value; state->ncentroids++; state->count += s->count; } count = 0; } /* * If there are only a couple values, just add them one by one, so that * we do proper compaction and sizing of centroids. Otherwise we might end * up with oversized centroid on the tails etc. */ for (i = 0; i < count; i++) tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with a single value. 
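 *
 * Usage sketch (this is the transition half of the tdigest_percentile_of
 * aggregates listed in the README; illustrative query):
 *
 *   SELECT tdigest_percentile_of(v, 100, 42.0) FROM t;
 *
 * i.e. an estimate of the fraction of rows with v <= 42.0.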
*/ Datum tdigest_add_double_values(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(2); double *values = NULL; int nvalues = 0; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); if (PG_NARGS() >= 4) { values = (double *) palloc(sizeof(double)); values[0] = PG_GETARG_FLOAT8(3); nvalues = 1; } state = tdigest_aggstate_allocate(0, nvalues, compression); if (values) { memcpy(state->values, values, sizeof(double) * nvalues); pfree(values); } MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with a single value. */ Datum tdigest_add_double_values_count(PG_FUNCTION_ARGS) { int64 i; int64 count; tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(3); double *values = NULL; int nvalues = 0; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); if (PG_NARGS() >= 5) { values = (double *) palloc(sizeof(double)); values[0] = PG_GETARG_FLOAT8(4); nvalues = 1; } state = tdigest_aggstate_allocate(0, nvalues, compression); if (values) { memcpy(state->values, values, sizeof(double) * nvalues); pfree(values); } MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); if (PG_ARGISNULL(2)) { count = 1; } else count = PG_GETARG_INT64(2); /* can't add values with non-positive counts */ if (count <= 0) elog(ERROR, "invalid count value %lld, must be a positive value", (long long) count); /* * When adding too many values (than would fit into an empty buffer, and * thus likely causing too many compactions), we instead build a t-digest * and them merge it into the existing state. * * This is much faster, because the t-digest can be generated in one go, * so there can be only one compaction at most. 
*/ if (count > BUFFER_SIZE(state->compression)) { int i; tdigest_t *new; double value = PG_GETARG_FLOAT8(1); new = tdigest_generate(state->compression, value, count); /* XXX maybe not necessary if there's enough space in the buffer */ tdigest_compact(state); for (i = 0; i < new->ncentroids; i++) { centroid_t *s = &new->centroids[i]; state->centroids[state->ncentroids].count = s->count; state->centroids[state->ncentroids].mean = value; state->ncentroids++; state->count += s->count; } count = 0; } /* * If there are only a couple values, just add them one by one, so that * we do proper compaction and sizing of centroids. Otherwise we might end * up with oversized centroid on the tails etc. */ for (i = 0; i < count; i++) tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with a single percentile. */ Datum tdigest_add_digest(PG_FUNCTION_ARGS) { int i; tdigest_aggstate_t *state; tdigest_t *digest; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_digest called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(1)); /* make sure we get digest with the new format */ digest = tdigest_update_format(digest); /* make sure the t-digest format is supported */ if (digest->flags != TDIGEST_STORES_MEAN) elog(ERROR, "unsupported t-digest on-disk format"); /* if there's no aggregate state allocated, create it now */ if (PG_ARGISNULL(0)) { double *percentiles = NULL; int npercentiles = 0; MemoryContext oldcontext; oldcontext = MemoryContextSwitchTo(aggcontext); if (PG_NARGS() >= 3) { percentiles = (double *) palloc(sizeof(double)); percentiles[0] = PG_GETARG_FLOAT8(2); npercentiles = 1; check_percentiles(percentiles, npercentiles); } state = tdigest_aggstate_allocate(npercentiles, 0, digest->compression); if (percentiles) { memcpy(state->percentiles, percentiles, sizeof(double) * npercentiles); pfree(percentiles); } MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); /* copy data from the tdigest into the aggstate */ for (i = 0; i < digest->ncentroids; i++) tdigest_add_centroid(state, digest->centroids[i].mean, digest->centroids[i].count); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with a single value. */ Datum tdigest_add_digest_values(PG_FUNCTION_ARGS) { int i; tdigest_aggstate_t *state; tdigest_t *digest; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_digest called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. 
*/ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(1)); /* make sure we get digest with the new format */ digest = tdigest_update_format(digest); /* make sure the t-digest format is supported */ if (digest->flags != TDIGEST_STORES_MEAN) elog(ERROR, "unsupported t-digest on-disk format"); /* if there's no aggregate state allocated, create it now */ if (PG_ARGISNULL(0)) { double *values = NULL; int nvalues = 0; MemoryContext oldcontext; oldcontext = MemoryContextSwitchTo(aggcontext); if (PG_NARGS() >= 3) { values = (double *) palloc(sizeof(double)); values[0] = PG_GETARG_FLOAT8(2); nvalues = 1; } state = tdigest_aggstate_allocate(0, nvalues, digest->compression); if (values) { memcpy(state->values, values, sizeof(double) * nvalues); pfree(values); } MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); for (i = 0; i < digest->ncentroids; i++) tdigest_add_centroid(state, digest->centroids[i].mean, digest->centroids[i].count); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with an array of percentiles. */ Datum tdigest_add_double_array(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double_array called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(2); double *percentiles; int npercentiles; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); percentiles = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(3), &npercentiles); check_percentiles(percentiles, npercentiles); state = tdigest_aggstate_allocate(npercentiles, 0, compression); memcpy(state->percentiles, percentiles, sizeof(double) * npercentiles); pfree(percentiles); MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with an array of percentiles. */ Datum tdigest_add_double_array_count(PG_FUNCTION_ARGS) { int64 i; int64 count; tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double_array called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest or NULL. 
*/ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(3); double *percentiles; int npercentiles; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); percentiles = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(4), &npercentiles); check_percentiles(percentiles, npercentiles); state = tdigest_aggstate_allocate(npercentiles, 0, compression); memcpy(state->percentiles, percentiles, sizeof(double) * npercentiles); pfree(percentiles); MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); if (PG_ARGISNULL(2)) { count = 1; } else count = PG_GETARG_INT64(2); /* can't add values with non-positive counts */ if (count <= 0) elog(ERROR, "invalid count value %lld, must be a positive value", (long long) count); /* * Add the values one by one, not as one large centroid with the count. * We do it like this to allow proper compaction and sizing of centroids, * otherwise we might end up with oversized centroid on the tails etc. * * XXX If this turns out a bit too expensive, we may try determining the * size by looking for the smallest centroid covering this value. */ for (i = 0; i < count; i++) tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with an array of values. */ Datum tdigest_add_double_array_values(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double_array called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(2); double *values; int nvalues; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); values = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(3), &nvalues); state = tdigest_aggstate_allocate(0, nvalues, compression); memcpy(state->values, values, sizeof(double) * nvalues); pfree(values); MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a value to the tdigest (create one if needed). Transition function * for tdigest aggregate with an array of values. */ Datum tdigest_add_double_array_values_count(PG_FUNCTION_ARGS) { int64 i; int64 count; tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_double_array called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest or NULL. 
*/ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression = PG_GETARG_INT32(3); double *values; int nvalues; MemoryContext oldcontext; check_compression(compression); oldcontext = MemoryContextSwitchTo(aggcontext); values = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(4), &nvalues); state = tdigest_aggstate_allocate(0, nvalues, compression); memcpy(state->values, values, sizeof(double) * nvalues); pfree(values); MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); if (PG_ARGISNULL(2)) { count = 1; } else count = PG_GETARG_INT64(2); /* can't add values with non-positive counts */ if (count <= 0) elog(ERROR, "invalid count value %lld, must be a positive value", (long long) count); /* * Add the values one by one, not as one large centroid with the count. * We do it like this to allow proper compaction and sizing of centroids, * otherwise we might end up with oversized centroid on the tails etc. * * XXX If this turns out a bit too expensive, we may try determining the * size by looking for the smallest centroid covering this value. */ for (i = 0; i < count; i++) tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(state); } /* * Add a digest to the tdigest (create one if needed). Transition function * for tdigest aggregate with an array of percentiles. */ Datum tdigest_add_digest_array(PG_FUNCTION_ARGS) { int i; tdigest_aggstate_t *state; tdigest_t *digest; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_digest_array called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(1)); /* make sure we get digest with the new format */ digest = tdigest_update_format(digest); /* make sure the t-digest format is supported */ if (digest->flags != TDIGEST_STORES_MEAN) elog(ERROR, "unsupported t-digest on-disk format"); /* if there's no aggregate state allocated, create it now */ if (PG_ARGISNULL(0)) { double *percentiles; int npercentiles; MemoryContext oldcontext; oldcontext = MemoryContextSwitchTo(aggcontext); percentiles = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(2), &npercentiles); check_percentiles(percentiles, npercentiles); state = tdigest_aggstate_allocate(npercentiles, 0, digest->compression); memcpy(state->percentiles, percentiles, sizeof(double) * npercentiles); pfree(percentiles); MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); for (i = 0; i < digest->ncentroids; i++) tdigest_add_centroid(state, digest->centroids[i].mean, digest->centroids[i].count); PG_RETURN_POINTER(state); } /* * Add a digest to the tdigest (create one if needed). Transition function * for tdigest aggregate with an array of values. 
*/ Datum tdigest_add_digest_array_values(PG_FUNCTION_ARGS) { int i; tdigest_aggstate_t *state; tdigest_t *digest; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_add_digest_array called in non-aggregate context"); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(1)); /* make sure we get digest with the new format */ digest = tdigest_update_format(digest); /* make sure the t-digest format is supported */ if (digest->flags != TDIGEST_STORES_MEAN) elog(ERROR, "unsupported t-digest on-disk format"); /* if there's no aggregate state allocated, create it now */ if (PG_ARGISNULL(0)) { double *values; int nvalues; MemoryContext oldcontext; oldcontext = MemoryContextSwitchTo(aggcontext); values = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(2), &nvalues); state = tdigest_aggstate_allocate(0, nvalues, digest->compression); memcpy(state->values, values, sizeof(double) * nvalues); pfree(values); MemoryContextSwitchTo(oldcontext); } else state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); for (i = 0; i < digest->ncentroids; i++) tdigest_add_centroid(state, digest->centroids[i].mean, digest->centroids[i].count); PG_RETURN_POINTER(state); } /* * Compute percentile from a tdigest. Final function for tdigest aggregate * with a single percentile. */ Datum tdigest_percentiles(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; MemoryContext aggcontext; double ret; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_percentiles called in non-aggregate context"); /* if there's no digest, return NULL */ if (PG_ARGISNULL(0)) PG_RETURN_NULL(); state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); tdigest_compute_quantiles(state, &ret); PG_RETURN_FLOAT8(ret); } /* * Compute percentile from a tdigest. Final function for tdigest aggregate * with a single percentile. */ Datum tdigest_percentiles_of(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; MemoryContext aggcontext; double ret; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_percentiles_of called in non-aggregate context"); /* if there's no digest, return NULL */ if (PG_ARGISNULL(0)) PG_RETURN_NULL(); state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); tdigest_compute_quantiles_of(state, &ret); PG_RETURN_FLOAT8(ret); } /* * Build a t-digest varlena value from the aggegate state. */ Datum tdigest_digest(PG_FUNCTION_ARGS) { tdigest_t *digest; tdigest_aggstate_t *state; MemoryContext aggcontext; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_digest called in non-aggregate context"); /* if there's no digest, return NULL */ if (PG_ARGISNULL(0)) PG_RETURN_NULL(); state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); digest = tdigest_aggstate_to_digest(state, true); PG_RETURN_POINTER(digest); } /* * Compute percentiles from a tdigest. Final function for tdigest aggregate * with an array of percentiles. 
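 *
 * Usage sketch (array variant, per the aggregate signatures in the README;
 * illustrative query):
 *
 *   SELECT tdigest_percentile(v, 100, ARRAY[0.5, 0.95, 0.99]) FROM t;
 *
 * computes all three estimates in a single pass over the digest.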
*/ Datum tdigest_array_percentiles(PG_FUNCTION_ARGS) { double *result; MemoryContext aggcontext; tdigest_aggstate_t *state; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_array_percentiles called in non-aggregate context"); if (PG_ARGISNULL(0)) PG_RETURN_NULL(); state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); result = palloc(state->npercentiles * sizeof(double)); tdigest_compute_quantiles(state, result); return double_to_array(fcinfo, result, state->npercentiles); } /* * Compute percentiles from a tdigest. Final function for tdigest aggregate * with an array of values. */ Datum tdigest_array_percentiles_of(PG_FUNCTION_ARGS) { double *result; MemoryContext aggcontext; tdigest_aggstate_t *state; /* cannot be called directly because of internal-type argument */ if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_array_percentiles_of called in non-aggregate context"); if (PG_ARGISNULL(0)) PG_RETURN_NULL(); state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); result = palloc(state->nvalues * sizeof(double)); tdigest_compute_quantiles_of(state, result); return double_to_array(fcinfo, result, state->nvalues); } Datum tdigest_serial(PG_FUNCTION_ARGS) { bytea *v; tdigest_aggstate_t *state; Size len; char *ptr; state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); len = offsetof(tdigest_aggstate_t, percentiles) + state->npercentiles * sizeof(double) + state->nvalues * sizeof(double) + state->ncentroids * sizeof(centroid_t); v = palloc(len + VARHDRSZ); SET_VARSIZE(v, len + VARHDRSZ); ptr = VARDATA(v); memcpy(ptr, state, offsetof(tdigest_aggstate_t, percentiles)); ptr += offsetof(tdigest_aggstate_t, percentiles); if (state->npercentiles > 0) { memcpy(ptr, state->percentiles, sizeof(double) * state->npercentiles); ptr += sizeof(double) * state->npercentiles; } if (state->nvalues > 0) { memcpy(ptr, state->values, sizeof(double) * state->nvalues); ptr += sizeof(double) * state->nvalues; } /* FIXME maybe don't serialize full centroids, but just sum/count */ memcpy(ptr, state->centroids, sizeof(centroid_t) * state->ncentroids); ptr += sizeof(centroid_t) * state->ncentroids; Assert(VARDATA(v) + len == ptr); PG_RETURN_POINTER(v); } Datum tdigest_deserial(PG_FUNCTION_ARGS) { bytea *v = (bytea *) PG_GETARG_POINTER(0); char *ptr = VARDATA_ANY(v); tdigest_aggstate_t tmp; tdigest_aggstate_t *state; double *percentiles = NULL; double *values = NULL; /* copy aggstate header into a local variable */ memcpy(&tmp, ptr, offsetof(tdigest_aggstate_t, percentiles)); ptr += offsetof(tdigest_aggstate_t, percentiles); /* allocate and copy percentiles */ if (tmp.npercentiles > 0) { percentiles = palloc(tmp.npercentiles * sizeof(double)); memcpy(percentiles, ptr, tmp.npercentiles * sizeof(double)); ptr += tmp.npercentiles * sizeof(double); } /* allocate and copy values */ if (tmp.nvalues > 0) { values = palloc(tmp.nvalues * sizeof(double)); memcpy(values, ptr, tmp.nvalues * sizeof(double)); ptr += tmp.nvalues * sizeof(double); } state = tdigest_aggstate_allocate(tmp.npercentiles, tmp.nvalues, tmp.compression); if (tmp.npercentiles > 0) { memcpy(state->percentiles, percentiles, tmp.npercentiles * sizeof(double)); pfree(percentiles); } if (tmp.nvalues > 0) { memcpy(state->values, values, tmp.nvalues * sizeof(double)); pfree(values); } /* copy the data into the newly-allocated state */ memcpy(state, &tmp, offsetof(tdigest_aggstate_t, percentiles)); /* we don't need to move the pointer */ /* copy the centroids back */ 
memcpy(state->centroids, ptr, sizeof(centroid_t) * state->ncentroids); ptr += sizeof(centroid_t) * state->ncentroids; PG_RETURN_POINTER(state); } static tdigest_aggstate_t * tdigest_copy(tdigest_aggstate_t *state) { tdigest_aggstate_t *copy; copy = tdigest_aggstate_allocate(state->npercentiles, state->nvalues, state->compression); memcpy(copy, state, offsetof(tdigest_aggstate_t, percentiles)); if (state->nvalues > 0) memcpy(copy->values, state->values, sizeof(double) * state->nvalues); if (state->npercentiles > 0) memcpy(copy->percentiles, state->percentiles, sizeof(double) * state->npercentiles); memcpy(copy->centroids, state->centroids, state->ncentroids * sizeof(centroid_t)); return copy; } Datum tdigest_combine(PG_FUNCTION_ARGS) { tdigest_aggstate_t *src; tdigest_aggstate_t *dst; MemoryContext aggcontext; MemoryContext oldcontext; if (!AggCheckCallContext(fcinfo, &aggcontext)) elog(ERROR, "tdigest_combine called in non-aggregate context"); /* if no "merged" state yet, try creating it */ if (PG_ARGISNULL(0)) { /* nope, the second argument is NULL to, so return NULL */ if (PG_ARGISNULL(1)) PG_RETURN_NULL(); /* the second argument is not NULL, so copy it */ src = (tdigest_aggstate_t *) PG_GETARG_POINTER(1); /* copy the digest into the right long-lived memory context */ oldcontext = MemoryContextSwitchTo(aggcontext); src = tdigest_copy(src); MemoryContextSwitchTo(oldcontext); PG_RETURN_POINTER(src); } /* * If the second argument is NULL, just return the first one (we know * it's not NULL at this point). */ if (PG_ARGISNULL(1)) PG_RETURN_DATUM(PG_GETARG_DATUM(0)); /* Now we know neither argument is NULL, so merge them. */ src = (tdigest_aggstate_t *) PG_GETARG_POINTER(1); dst = (tdigest_aggstate_t *) PG_GETARG_POINTER(0); /* * Do a compaction on each digest, to make sure we have enough space. * * XXX Maybe do this only when necessary, i.e. when we can't fit the * data into the dst digest? Also, is it really ensured this gives us * enough free space? */ tdigest_compact(dst); tdigest_compact(src); AssertCheckTDigestAggState(dst); AssertCheckTDigestAggState(src); /* copy the second part */ memcpy(&dst->centroids[dst->ncentroids], src->centroids, src->ncentroids * sizeof(centroid_t)); dst->ncentroids += src->ncentroids; dst->count += src->count; /* mark the digest as not compacted */ dst->ncompacted = 0; AssertCheckTDigestAggState(dst); PG_RETURN_POINTER(dst); } /* API for incremental updates */ /* * expand the t-digest into an in-memory aggregate state */ static tdigest_aggstate_t * tdigest_digest_to_aggstate(tdigest_t *digest) { int i; tdigest_aggstate_t *state; /* make sure we get digest with the new format */ digest = tdigest_update_format(digest); /* make sure the t-digest format is supported */ if (digest->flags != TDIGEST_STORES_MEAN) elog(ERROR, "unsupported t-digest on-disk format"); state = tdigest_aggstate_allocate(0, 0, digest->compression); /* copy data from the tdigest into the aggstate */ for (i = 0; i < digest->ncentroids; i++) tdigest_add_centroid(state, digest->centroids[i].mean, digest->centroids[i].count); return state; } /* * Add a single value to the t-digest. This is not very efficient, as it has * to deserialize the t-digest into the in-memory aggstate representation * and serialize it back for each call, but it's convenient and acceptable * for some use cases. 
* * When efficiency is important, it may be possible to use the batch variant * with first aggregating the updates into a t-digest, and then merge that * into an existing t-digest in one step using tdigest_union_double_increment * * This is similar to hll_add, while the "union" is more like hll_union. */ Datum tdigest_add_double_increment(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; bool compact = PG_GETARG_BOOL(3); /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression; /* * We don't require compression, but only when there is an existing * t-digest value. Make sure the value was supplied. */ if (PG_ARGISNULL(2)) elog(ERROR, "compression value not supplied, but t-digest is NULL"); compression = PG_GETARG_INT32(2); check_compression(compression); state = tdigest_aggstate_allocate(0, 0, compression); } else state = tdigest_digest_to_aggstate(PG_GETARG_TDIGEST(0)); tdigest_add(state, PG_GETARG_FLOAT8(1)); PG_RETURN_POINTER(tdigest_aggstate_to_digest(state, compact)); } /* * Add an array of values to the t-digest. This amortizes the overhead of * deserializing and serializing the t-digest, compared to the per-value * version. * * When efficiency is important, it may be possible to use the batch variant * with first aggregating the updates into a t-digest, and then merge that * into an existing t-digest in one step using tdigest_union_double_increment * * This is similar to hll_add, while the "union" is more like hll_union. */ Datum tdigest_add_double_array_increment(PG_FUNCTION_ARGS) { tdigest_aggstate_t *state; bool compact = PG_GETARG_BOOL(3); double *values; int nvalues; int i; /* * We want to skip NULL values altogether - we return either the existing * t-digest (if it already exists) or NULL. */ if (PG_ARGISNULL(1)) { if (PG_ARGISNULL(0)) PG_RETURN_NULL(); /* if there already is a state accumulated, don't forget it */ PG_RETURN_DATUM(PG_GETARG_DATUM(0)); } /* if there's no digest allocated, create it now */ if (PG_ARGISNULL(0)) { int compression; /* * We don't require compression, but only when there is an existing * t-digest value. Make sure the value was supplied. */ if (PG_ARGISNULL(2)) elog(ERROR, "compression value not supplied, but t-digest is NULL"); compression = PG_GETARG_INT32(2); check_compression(compression); state = tdigest_aggstate_allocate(0, 0, compression); } else state = tdigest_digest_to_aggstate(PG_GETARG_TDIGEST(0)); values = array_to_double(fcinfo, PG_GETARG_ARRAYTYPE_P(1), &nvalues); for (i = 0; i < nvalues; i++) tdigest_add(state, values[i]); PG_RETURN_POINTER(tdigest_aggstate_to_digest(state, compact)); } /* * Merge a t-digest into another t-digest. This is somewaht inefficient, as * it has to deserialize the t-digests into the in-memory aggstate values, * and serialize it back for each call, but it's better than doing it for * each individual value (like tdigest_union_double_increment). * * This is similar to hll_union. 
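 *
 * Batch-update sketch (assuming the SQL-level tdigest_union function and
 * tdigest aggregate exposed by the extension's SQL scripts; illustrative
 * only):
 *
 *   UPDATE stats
 *      SET digest = tdigest_union(digest,
 *                                 (SELECT tdigest(v, 100) FROM new_data));
 *
 * i.e. one merge per batch instead of one update per value.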
*/ Datum tdigest_union_double_increment(PG_FUNCTION_ARGS) { int i; tdigest_aggstate_t *state; tdigest_t *digest; bool compact = PG_GETARG_BOOL(2); if (PG_ARGISNULL(0) && PG_ARGISNULL(1)) PG_RETURN_NULL(); else if (PG_ARGISNULL(0)) PG_RETURN_POINTER(PG_GETARG_POINTER(1)); else if (PG_ARGISNULL(1)) PG_RETURN_POINTER(PG_GETARG_POINTER(0)); /* now we know both arguments are non-null */ /* parse the first digest (we'll merge the other one into this) */ state = tdigest_digest_to_aggstate(PG_GETARG_TDIGEST(0)); AssertCheckTDigestAggState(state); /* parse the second digest */ digest = PG_GETARG_TDIGEST(1); AssertCheckTDigest(digest); /* copy data from the tdigest into the aggstate */ for (i = 0; i < digest->ncentroids; i++) tdigest_add_centroid(state, digest->centroids[i].mean, digest->centroids[i].count); AssertCheckTDigestAggState(state); PG_RETURN_POINTER(tdigest_aggstate_to_digest(state, compact)); } /* * Comparator, ordering the centroids by mean value. * * When the mean is the same, we try ordering the centroids by count. * * In principle, centroids with the same mean represent the same value, * but we still need to care about the count to allow rebalancing the * centroids later. */ static int centroid_cmp(const void *a, const void *b) { double ma, mb; centroid_t *ca = (centroid_t *) a; centroid_t *cb = (centroid_t *) b; ma = ca->mean; mb = cb->mean; if (ma < mb) return -1; else if (ma > mb) return 1; if (ca->count < cb->count) return -1; else if (ca->count > cb->count) return 1; return 0; } Datum tdigest_in(PG_FUNCTION_ARGS) { int i, r; char *str = PG_GETARG_CSTRING(0); tdigest_t *digest = NULL; /* t-digest header fields */ int32 flags; int64 count, total_count; int compression; int ncentroids; int header_length; char *ptr; r = sscanf(str, "flags %d count " INT64_FORMAT " compression %d centroids %d%n", &flags, &count, &compression, &ncentroids, &header_length); if (r != 4) elog(ERROR, "failed to parse t-digest value"); if ((compression < 10) || (compression > 10000)) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("compression for t-digest must be in [10, 10000]"))); if (count <= 0) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("count value for the t-digest must be positive"))); if (ncentroids <= 0) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("number of centroids for the t-digest must be positive"))); if (ncentroids > BUFFER_SIZE(compression)) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("number of centroids for the t-digest exceeds buffer size"))); digest = tdigest_allocate(ncentroids); digest->flags = flags; digest->count = count; digest->ncentroids = ncentroids; digest->compression = compression; ptr = str + header_length; total_count = 0; for (i = 0; i < digest->ncentroids; i++) { double mean; if (sscanf(ptr, " (%lf, " INT64_FORMAT ")", &mean, &count) != 2) elog(ERROR, "failed to parse centroid"); digest->centroids[i].count = count; digest->centroids[i].mean = mean; if (count <= 0) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("count value for all centroids in a t-digest must be positive"))); else if (count > digest->count) ereport(ERROR, (errcode(ERRCODE_INVALID_PARAMETER_VALUE), errmsg("count value of a centroid exceeds total count"))); /* the centroids should be sorted by mean */ if (i > 0) { double mean_prev = digest->centroids[i-1].mean; if (!(flags & TDIGEST_STORES_MEAN)) { mean = (mean / digest->centroids[i].count); mean_prev = (mean_prev / digest->centroids[i-1].count); } if 
Datum
tdigest_in(PG_FUNCTION_ARGS)
{
	int			i, r;
	char	   *str = PG_GETARG_CSTRING(0);
	tdigest_t  *digest = NULL;

	/* t-digest header fields */
	int32		flags;
	int64		count,
				total_count;
	int			compression;
	int			ncentroids;
	int			header_length;
	char	   *ptr;

	r = sscanf(str, "flags %d count " INT64_FORMAT " compression %d centroids %d%n",
			   &flags, &count, &compression, &ncentroids, &header_length);

	if (r != 4)
		elog(ERROR, "failed to parse t-digest value");

	if ((compression < 10) || (compression > 10000))
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("compression for t-digest must be in [10, 10000]")));

	if (count <= 0)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("count value for the t-digest must be positive")));

	if (ncentroids <= 0)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("number of centroids for the t-digest must be positive")));

	if (ncentroids > BUFFER_SIZE(compression))
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("number of centroids for the t-digest exceeds buffer size")));

	digest = tdigest_allocate(ncentroids);

	digest->flags = flags;
	digest->count = count;
	digest->ncentroids = ncentroids;
	digest->compression = compression;

	ptr = str + header_length;

	total_count = 0;

	for (i = 0; i < digest->ncentroids; i++)
	{
		double	mean;

		if (sscanf(ptr, " (%lf, " INT64_FORMAT ")", &mean, &count) != 2)
			elog(ERROR, "failed to parse centroid");

		digest->centroids[i].count = count;
		digest->centroids[i].mean = mean;

		if (count <= 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("count value for all centroids in a t-digest must be positive")));
		else if (count > digest->count)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("count value of a centroid exceeds total count")));

		/* the centroids should be sorted by mean */
		if (i > 0)
		{
			double	mean_prev = digest->centroids[i-1].mean;

			if (!(flags & TDIGEST_STORES_MEAN))
			{
				mean = (mean / digest->centroids[i].count);
				mean_prev = (mean_prev / digest->centroids[i-1].count);
			}

			if (mean_prev > mean)
				ereport(ERROR,
						(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
						 errmsg("centroids not sorted by mean")));
		}

		/* track the total count so that we can check later */
		total_count += count;

		/* skip to the end of the centroid */
		ptr = strchr(ptr, ')') + 1;
	}

	Assert(ptr == str + strlen(str));

	/* check that the total matches */
	if (total_count != digest->count)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("total count does not match the data (%lld != %lld)",
						(long long) total_count, (long long) digest->count)));

	/*
	 * Make sure we return a digest in the new format (it might be the
	 * old format, in which case "mean" fields actually store "sum").
	 */
	digest = tdigest_update_format(digest);

	AssertCheckTDigest(digest);

	PG_RETURN_POINTER(digest);
}

Datum
tdigest_out(PG_FUNCTION_ARGS)
{
	int			i;
	tdigest_t  *digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));
	StringInfoData	str;

	AssertCheckTDigest(digest);

	initStringInfo(&str);

	appendStringInfo(&str, "flags %d count " INT64_FORMAT " compression %d centroids %d",
					 digest->flags, digest->count, digest->compression,
					 digest->ncentroids);

	/*
	 * If this is an old tdigest with sum values, we'll send those, and
	 * it's up to the reader to fix it. It'll be indicated by not having
	 * the TDIGEST_STORES_MEAN flag.
	 */
	for (i = 0; i < digest->ncentroids; i++)
		appendStringInfo(&str, " (%lf, " INT64_FORMAT ")",
						 digest->centroids[i].mean,
						 digest->centroids[i].count);

	PG_RETURN_CSTRING(str.data);
}

Datum
tdigest_recv(PG_FUNCTION_ARGS)
{
	StringInfo	buf = (StringInfo) PG_GETARG_POINTER(0);
	tdigest_t  *digest;
	int			i;
	int64		count;
	int32		flags;
	int32		compression;
	int32		ncentroids;

	flags = pq_getmsgint(buf, sizeof(int32));

	/* make sure the t-digest format is supported */
	if ((flags != 0) && (flags != TDIGEST_STORES_MEAN))
		elog(ERROR, "unsupported t-digest on-disk format");

	count = pq_getmsgint64(buf);
	compression = pq_getmsgint(buf, sizeof(int32));
	ncentroids = pq_getmsgint(buf, sizeof(int32));

	digest = tdigest_allocate(ncentroids);

	digest->flags = flags;
	digest->count = count;
	digest->compression = compression;
	digest->ncentroids = ncentroids;

	for (i = 0; i < digest->ncentroids; i++)
	{
		digest->centroids[i].mean = pq_getmsgfloat8(buf);
		digest->centroids[i].count = pq_getmsgint64(buf);
	}

	/*
	 * Make sure we return a digest in the new format (it might be the
	 * old format, in which case "mean" fields actually store "sum").
	 */
	digest = tdigest_update_format(digest);

	PG_RETURN_POINTER(digest);
}

Datum
tdigest_send(PG_FUNCTION_ARGS)
{
	tdigest_t  *digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));
	StringInfoData	buf;
	int			i;

	pq_begintypsend(&buf);

	pq_sendint(&buf, digest->flags, 4);
	pq_sendint64(&buf, digest->count);
	pq_sendint(&buf, digest->compression, 4);
	pq_sendint(&buf, digest->ncentroids, 4);

	for (i = 0; i < digest->ncentroids; i++)
	{
		pq_sendfloat8(&buf, digest->centroids[i].mean);
		pq_sendint64(&buf, digest->centroids[i].count);
	}

	PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
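/*
 * tdigest_count
 *		Return the number of values the t-digest was built from.
 */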
Datum
tdigest_count(PG_FUNCTION_ARGS)
{
	tdigest_t  *digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));

	PG_RETURN_INT64(digest->count);
}

/*
 * tdigest_to_json
 *		Transform the tdigest into a JSON value.
 *
 * We make sure to always print mean, even for tdigests in the older format
 * storing sum for centroids. Otherwise the "mean" key would be confusing.
 * But we don't call tdigest_update_format, and instead we simply update the
 * flags and convert the sum/mean values.
 *
 * The centroids are stored in two separate arrays - one for means, one for
 * counts. That makes it easier to process, because it's clear the i-th
 * element in each array belongs to the i-th centroid. We might store it in
 * a single array, but then we'd have to walk it in pairs. And it'd mix
 * float and int values in the same array.
 */
Datum
tdigest_to_json(PG_FUNCTION_ARGS)
{
	int			i;
	StringInfoData	str;
	tdigest_t  *digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));
	int32		flags = digest->flags;

	initStringInfo(&str);

	appendStringInfoChar(&str, '{');

	flags |= TDIGEST_STORES_MEAN;

	appendStringInfo(&str, "\"flags\": %d, ", flags);
	appendStringInfo(&str, "\"count\": " INT64_FORMAT ", ", digest->count);
	appendStringInfo(&str, "\"compression\": %d, ", digest->compression);
	appendStringInfo(&str, "\"centroids\": %d, ", digest->ncentroids);

	appendStringInfoString(&str, "\"mean\": [");

	for (i = 0; i < digest->ncentroids; i++)
	{
		double	mean = digest->centroids[i].mean;

		if (i > 0)
			appendStringInfoString(&str, ", ");

		/*
		 * When the TDIGEST_STORES_MEAN flag is not set, the value is
		 * actually a sum, so convert it to mean now. We have to check the
		 * digest->flags, not the local variable.
		 */
		if (! (digest->flags & TDIGEST_STORES_MEAN))
			mean = mean / digest->centroids[i].count;

		/* don't print insignificant zeroes to the right of decimal point */
		appendStringInfo(&str, "%g", mean);
	}

	appendStringInfoString(&str, "], ");

	appendStringInfoString(&str, "\"count\": [");

	for (i = 0; i < digest->ncentroids; i++)
	{
		if (i > 0)
			appendStringInfoString(&str, ", ");

		appendStringInfo(&str, INT64_FORMAT, digest->centroids[i].count);
	}

	appendStringInfoString(&str, "]");

	appendStringInfoChar(&str, '}');

	PG_RETURN_TEXT_P(cstring_to_text(str.data));
}
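/*
 * Example output of the two conversions for tdigest(i / 1000.0, 10) built
 * from i = 1..1000 (taken from the "cast" regression test, abbreviated):
 *
 * tdigest_to_json:
 *     {"flags": 1, "count": 1000, "compression": 10, "centroids": 13,
 *      "mean": [0.001, 0.002, 0.0045, ...], "count": [1, 1, 4, ...]}
 *
 * tdigest_to_array (the same header quadruple, then (mean, count) pairs):
 *     {1, 1000, 10, 13, 0.001, 1, 0.002, 1, 0.0045, 4, ...}
 */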
/*
 * tdigest_to_array
 *		Transform the tdigest into an array of double values.
 *
 * The whole digest is stored in a single "double precision" array, which
 * may be a bit confusing and perhaps fragile if more fields need to be
 * added in the future. The initial elements are flags, count (number of
 * items added to the digest), compression (determines the limit on number
 * of centroids) and current number of centroids. Then follows a stream of
 * values encoding the centroids in pairs of (mean, count).
 *
 * We make sure to always store the mean, even for tdigests in the older
 * format storing sum for centroids. Otherwise the "mean" values would be
 * confusing. But we don't call tdigest_update_format, and instead we simply
 * update the flags and convert the sum/mean values.
 */
Datum
tdigest_to_array(PG_FUNCTION_ARGS)
{
	int			i,
				idx;
	tdigest_t  *digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(0));
	int32		flags = digest->flags;
	double	   *values;
	int			nvalues;

	flags |= TDIGEST_STORES_MEAN;

	/* number of values to store in the array */
	nvalues = 4 + (digest->ncentroids * 2);
	values = (double *) palloc(sizeof(double) * nvalues);

	idx = 0;
	values[idx++] = flags;
	values[idx++] = digest->count;
	values[idx++] = digest->compression;
	values[idx++] = digest->ncentroids;

	for (i = 0; i < digest->ncentroids; i++)
	{
		double	mean = digest->centroids[i].mean;

		/*
		 * When the TDIGEST_STORES_MEAN flag is not set, the value is
		 * actually a sum, so convert it to mean now. We have to check the
		 * digest->flags, not the local variable.
		 */
		if (! (digest->flags & TDIGEST_STORES_MEAN))
			mean = mean / digest->centroids[i].count;

		/* store the mean and the count for the centroid */
		values[idx++] = mean;
		values[idx++] = digest->centroids[i].count;
	}

	Assert(idx == nvalues);

	return double_to_array(fcinfo, values, nvalues);
}

/*
 * Add a value to the t-digest (create one if needed). Transition function
 * for the trimmed aggregates with a single value per row.
 */
Datum
tdigest_add_double_trimmed(PG_FUNCTION_ARGS)
{
	tdigest_aggstate_t *state;
	MemoryContext	aggcontext;

	/* cannot be called directly because of internal-type argument */
	if (!AggCheckCallContext(fcinfo, &aggcontext))
		elog(ERROR, "tdigest_add_double_trimmed called in non-aggregate context");

	/*
	 * We want to skip NULL values altogether - we return either the existing
	 * t-digest (if it already exists) or NULL.
	 */
	if (PG_ARGISNULL(1))
	{
		if (PG_ARGISNULL(0))
			PG_RETURN_NULL();

		/* if there already is a state accumulated, don't forget it */
		PG_RETURN_DATUM(PG_GETARG_DATUM(0));
	}

	/* if there's no digest allocated, create it now */
	if (PG_ARGISNULL(0))
	{
		MemoryContext	oldcontext;
		int		compression = PG_GETARG_INT32(2);
		double	low = PG_GETARG_FLOAT8(3);
		double	high = PG_GETARG_FLOAT8(4);

		check_compression(compression);
		check_trim_values(low, high);

		oldcontext = MemoryContextSwitchTo(aggcontext);

		state = tdigest_aggstate_allocate(0, 0, compression);

		state->trim_low = low;
		state->trim_high = high;

		MemoryContextSwitchTo(oldcontext);
	}
	else
		state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0);

	tdigest_add(state, PG_GETARG_FLOAT8(1));

	PG_RETURN_POINTER(state);
}

/*
 * Add a value with a count to the t-digest (create one if needed).
 * Transition function for the trimmed aggregates with (value, count) input.
 */
Datum
tdigest_add_double_count_trimmed(PG_FUNCTION_ARGS)
{
	int		i;
	int64	count;
	tdigest_aggstate_t *state;
	MemoryContext	aggcontext;

	/* cannot be called directly because of internal-type argument */
	if (!AggCheckCallContext(fcinfo, &aggcontext))
		elog(ERROR, "tdigest_add_double_count_trimmed called in non-aggregate context");

	/*
	 * We want to skip NULL values altogether - we return either the existing
	 * t-digest (if it already exists) or NULL.
	 */
	if (PG_ARGISNULL(1))
	{
		if (PG_ARGISNULL(0))
			PG_RETURN_NULL();

		/* if there already is a state accumulated, don't forget it */
		PG_RETURN_DATUM(PG_GETARG_DATUM(0));
	}

	/* if there's no digest allocated, create it now */
	if (PG_ARGISNULL(0))
	{
		MemoryContext	oldcontext;
		int		compression = PG_GETARG_INT32(3);
		double	low = PG_GETARG_FLOAT8(4);
		double	high = PG_GETARG_FLOAT8(5);

		check_compression(compression);
		check_trim_values(low, high);

		oldcontext = MemoryContextSwitchTo(aggcontext);

		state = tdigest_aggstate_allocate(0, 0, compression);

		state->trim_low = low;
		state->trim_high = high;

		MemoryContextSwitchTo(oldcontext);
	}
	else
		state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0);

	if (PG_ARGISNULL(2))
		count = 1;
	else
		count = PG_GETARG_INT64(2);

	/* can't add values with non-positive counts */
	if (count <= 0)
		elog(ERROR, "invalid count value %lld, must be a positive value",
			 (long long) count);

	/*
	 * When adding more values than would fit into an empty buffer (thus
	 * likely causing too many compactions), we instead build a separate
	 * t-digest and then merge it into the existing state.
	 *
	 * This is much faster, because the t-digest can be generated in one go,
	 * so there can be only one compaction at most.
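	 *
	 * For example, adding a single value with a count far larger than
	 * BUFFER_SIZE(state->compression) one value at a time would trigger
	 * a compaction roughly once per buffer fill, while generating the
	 * digest for the repeated value up front reduces all of that to a
	 * single merge.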
	 */
	if (count > BUFFER_SIZE(state->compression))
	{
		tdigest_t  *new;
		double		value = PG_GETARG_FLOAT8(1);

		new = tdigest_generate(state->compression, value, count);

		/* XXX maybe not necessary if there's enough space in the buffer */
		tdigest_compact(state);

		/* all centroids generated from a single value share mean == value */
		for (i = 0; i < new->ncentroids; i++)
		{
			centroid_t *s = &new->centroids[i];

			state->centroids[state->ncentroids].count = s->count;
			state->centroids[state->ncentroids].mean = value;
			state->ncentroids++;
			state->count += s->count;
		}

		count = 0;
	}

	/*
	 * If there are only a couple values, just add them one by one, so that
	 * we do proper compaction and sizing of centroids. Otherwise we might
	 * end up with oversized centroids on the tails etc.
	 */
	for (i = 0; i < count; i++)
		tdigest_add(state, PG_GETARG_FLOAT8(1));

	PG_RETURN_POINTER(state);
}

/*
 * Merge a t-digest into the aggregate state (create one if needed).
 * Transition function for the trimmed aggregates accepting a tdigest.
 */
Datum
tdigest_add_digest_trimmed(PG_FUNCTION_ARGS)
{
	int		i;
	tdigest_aggstate_t *state;
	tdigest_t  *digest;
	MemoryContext	aggcontext;

	/* cannot be called directly because of internal-type argument */
	if (!AggCheckCallContext(fcinfo, &aggcontext))
		elog(ERROR, "tdigest_add_digest_trimmed called in non-aggregate context");

	/*
	 * We want to skip NULL values altogether - we return either the existing
	 * t-digest (if it already exists) or NULL.
	 */
	if (PG_ARGISNULL(1))
	{
		if (PG_ARGISNULL(0))
			PG_RETURN_NULL();

		/* if there already is a state accumulated, don't forget it */
		PG_RETURN_DATUM(PG_GETARG_DATUM(0));
	}

	digest = (tdigest_t *) PG_DETOAST_DATUM(PG_GETARG_DATUM(1));

	/* make sure we get a digest in the new format */
	digest = tdigest_update_format(digest);

	/* make sure the t-digest format is supported */
	if (digest->flags != TDIGEST_STORES_MEAN)
		elog(ERROR, "unsupported t-digest on-disk format");

	/* if there's no aggregate state allocated, create it now */
	if (PG_ARGISNULL(0))
	{
		MemoryContext	oldcontext;
		double	low = PG_GETARG_FLOAT8(2);
		double	high = PG_GETARG_FLOAT8(3);

		check_trim_values(low, high);

		oldcontext = MemoryContextSwitchTo(aggcontext);

		state = tdigest_aggstate_allocate(0, 0, digest->compression);

		state->trim_low = low;
		state->trim_high = high;

		MemoryContextSwitchTo(oldcontext);
	}
	else
		state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0);

	for (i = 0; i < digest->ncentroids; i++)
		tdigest_add_centroid(state, digest->centroids[i].mean,
							 digest->centroids[i].count);

	PG_RETURN_POINTER(state);
}

/*
 * Calculate trimmed aggregates from centroids.
 */
static void
tdigest_trimmed_agg(centroid_t *centroids, int ncentroids, int64 count,
					double low, double high, double *sump, int64 *countp)
{
	int		i;
	double	sum = 0;
	int64	count_done = 0,
			count_low,
			count_high;

	/* translate the percentiles to counts */
	count_low = floor(count * low);
	count_high = ceil(count * high);

	/* from now on, count tracks the number of values actually included */
	count = 0;

	for (i = 0; i < ncentroids; i++)
	{
		int64	count_add = 0;

		/* Assume the whole centroid falls into the range. */
		count_add = centroids[i].count;

		/*
		 * If we haven't reached the low threshold yet, skip the appropriate
		 * part of the centroid.
		 */
		count_add -= Min(Max(0, count_low - count_done), count_add);

		/*
		 * If we have reached the upper threshold, ignore the overflowing
		 * part of the centroid.
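		 *
		 * Worked example: with count = 100, low = 0.1 and high = 0.9 we
		 * get count_low = 10 and count_high = 90. A centroid with
		 * count = 10 reached at count_done = 5 covers input positions
		 * 5..14, so the low cutoff removes its first 5 items and the
		 * remaining 5 are included in the trimmed sum.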
		 */
		count_add = Min(Max(0, count_high - count_done), count_add);

		/* consider the whole centroid processed */
		count_done += centroids[i].count;

		/* increment the sum / count */
		sum += centroids[i].mean * count_add;
		count += count_add;

		/* break once we cross the high threshold */
		if (count_done >= count_high)
			break;
	}

	*sump = sum;
	*countp = count;
}

/*
 * Compute the trimmed average from a tdigest. Final function for the
 * trimmed avg aggregates.
 */
Datum
tdigest_trimmed_avg(PG_FUNCTION_ARGS)
{
	tdigest_aggstate_t *state;
	MemoryContext	aggcontext;
	double	sum;
	int64	count;

	/* cannot be called directly because of internal-type argument */
	if (!AggCheckCallContext(fcinfo, &aggcontext))
		elog(ERROR, "tdigest_trimmed_avg called in non-aggregate context");

	/* if there's no digest, return NULL */
	if (PG_ARGISNULL(0))
		PG_RETURN_NULL();

	state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0);

	tdigest_trimmed_agg(state->centroids, state->ncentroids, state->count,
						state->trim_low, state->trim_high, &sum, &count);

	if (count > 0)
		PG_RETURN_FLOAT8(sum / count);

	PG_RETURN_NULL();
}

/*
 * Compute the trimmed sum from a tdigest. Final function for the trimmed
 * sum aggregates.
 */
Datum
tdigest_trimmed_sum(PG_FUNCTION_ARGS)
{
	tdigest_aggstate_t *state;
	MemoryContext	aggcontext;
	double	sum;
	int64	count;

	/* cannot be called directly because of internal-type argument */
	if (!AggCheckCallContext(fcinfo, &aggcontext))
		elog(ERROR, "tdigest_trimmed_sum called in non-aggregate context");

	/* if there's no digest, return NULL */
	if (PG_ARGISNULL(0))
		PG_RETURN_NULL();

	state = (tdigest_aggstate_t *) PG_GETARG_POINTER(0);

	tdigest_trimmed_agg(state->centroids, state->ncentroids, state->count,
						state->trim_low, state->trim_high, &sum, &count);

	if (count > 0)
		PG_RETURN_FLOAT8(sum);

	PG_RETURN_NULL();
}

/*
 * Trimmed sum of a single digest (non-aggregate function).
 */
Datum
tdigest_digest_sum(PG_FUNCTION_ARGS)
{
	tdigest_t  *digest = PG_GETARG_TDIGEST(0);
	double	low = PG_GETARG_FLOAT8(1);
	double	high = PG_GETARG_FLOAT8(2);

	double	sum;
	int64	count;

	AssertCheckTDigest(digest);

	tdigest_trimmed_agg(digest->centroids, digest->ncentroids, digest->count,
						low, high, &sum, &count);

	if (count > 0)
		PG_RETURN_FLOAT8(sum);

	PG_RETURN_NULL();
}

/*
 * Trimmed average of a single digest (non-aggregate function).
 */
Datum
tdigest_digest_avg(PG_FUNCTION_ARGS)
{
	tdigest_t  *digest = PG_GETARG_TDIGEST(0);
	double	low = PG_GETARG_FLOAT8(1);
	double	high = PG_GETARG_FLOAT8(2);

	double	sum;
	int64	count;

	AssertCheckTDigest(digest);

	tdigest_trimmed_agg(digest->centroids, digest->ncentroids, digest->count,
						low, high, &sum, &count);

	if (count > 0)
		PG_RETURN_FLOAT8(sum / count);

	PG_RETURN_NULL();
}

/*
 * Transform an input FLOAT8 SQL array to a plain double C array.
 *
 * This expects a single-dimensional float8 array, fails otherwise.
 */
static double *
array_to_double(FunctionCallInfo fcinfo, ArrayType *v, int *len)
{
	double	   *result;
	int			nitems,
			   *dims,
				ndims;
	Oid			element_type;
	int16		typlen;
	bool		typbyval;
	char		typalign;
	int			i;

	/* deconstruct_array */
	Datum	   *elements;
	bool	   *nulls;
	int			nelements;

	ndims = ARR_NDIM(v);
	dims = ARR_DIMS(v);
	nitems = ArrayGetNItems(ndims, dims);

	/* this is a special-purpose function for single-dimensional arrays */
	if (ndims != 1)
		elog(ERROR, "expected a single-dimensional array (dims = %d)", ndims);

	/*
	 * if there are no elements, set the length to 0 and return NULL
	 *
	 * XXX Can this actually happen? For empty arrays we seem to error out
	 * on the preceding check, i.e. ndims = 0.
	 */
*/ if (nitems == 0) { (*len) = 0; return NULL; } element_type = ARR_ELEMTYPE(v); /* XXX not sure if really needed (can it actually happen?) */ if (element_type != FLOAT8OID) elog(ERROR, "array_to_double expects FLOAT8 array"); /* allocate space for enough elements */ result = (double*) palloc(nitems * sizeof(double)); get_typlenbyvalalign(element_type, &typlen, &typbyval, &typalign); deconstruct_array(v, element_type, typlen, typbyval, typalign, &elements, &nulls, &nelements); /* we should get the same counts here */ Assert(nelements == nitems); for (i = 0; i < nelements; i++) { if (nulls[i]) elog(ERROR, "NULL not allowed as a percentile value"); result[i] = DatumGetFloat8(elements[i]); } (*len) = nelements; return result; } /* * construct an SQL array from a simple C double array */ static Datum double_to_array(FunctionCallInfo fcinfo, double *d, int len) { ArrayBuildState *astate = NULL; int i; for (i = 0; i < len; i++) { /* stash away this field */ astate = accumArrayResult(astate, Float8GetDatum(d[i]), false, FLOAT8OID, CurrentMemoryContext); } PG_RETURN_ARRAYTYPE_P(DatumGetPointer(makeArrayResult(astate, CurrentMemoryContext))); } tdigest-1.4.1/tdigest.control000066400000000000000000000001361450426374500162370ustar00rootroot00000000000000comment = 'Provides tdigest aggregate function.' default_version = '1.4.1' relocatable = true tdigest-1.4.1/test/000077500000000000000000000000001450426374500141515ustar00rootroot00000000000000tdigest-1.4.1/test/expected/000077500000000000000000000000001450426374500157525ustar00rootroot00000000000000tdigest-1.4.1/test/expected/basic.out000066400000000000000000001220151450426374500175650ustar00rootroot00000000000000\set ECHO none -- SRF function implementing a simple deterministict PRNG CREATE OR REPLACE FUNCTION prng(nrows int, seed int = 23982, p1 bigint = 16807, p2 bigint = 0, n bigint = 2147483647) RETURNS SETOF double precision AS $$ DECLARE val INT := seed; BEGIN FOR i IN 1..nrows LOOP val := (val * p1 + p2) % n; RETURN NEXT (val::double precision / n); END LOOP; RETURN; END; $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION random_normal(nrows int, mean double precision = 0.5, stddev double precision = 0.1, minval double precision = 0.0, maxval double precision = 1.0, seed int = 23982, p1 bigint = 16807, p2 bigint = 0, n bigint = 2147483647) RETURNS SETOF double precision AS $$ DECLARE v BIGINT := seed; x DOUBLE PRECISION; y DOUBLE PRECISION; s DOUBLE PRECISION; r INT := nrows; BEGIN WHILE true LOOP -- random x v := (v * p1 + p2) % n; x := 2 * v / n::double precision - 1.0; -- random y v := (v * p1 + p2) % n; y := 2 * v / n::double precision - 1.0; s := x^2 + y^2; IF s != 0.0 AND s < 1.0 THEN s = sqrt(-2 * ln(s) / s); x := mean + stddev * s * x; IF x >= minval AND x <= maxval THEN RETURN NEXT x; r := r - 1; END IF; EXIT WHEN r = 0; y := mean + stddev * s * y; IF y >= minval AND y <= maxval THEN RETURN NEXT y; r := r - 1; END IF; EXIT WHEN r = 0; END IF; END LOOP; END; $$ LANGUAGE plpgsql; DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', 
false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; ----------------------------------------------------------- -- nice data set with ordered (asc) / evenly-spaced data -- ----------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ------------------------------------------------------------ -- nice data set with ordered (desc) / evenly-spaced data -- ------------------------------------------------------------ -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ---------------------------------------------------- -- nice data set with random / evenly-spaced data -- ---------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -------------------------------------------------- -- nice data set with random data (skewed sqrt) -- -------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ------------------------------------------------------- -- nice data set with random data (skewed sqrt+sqrt) -- ------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ------------------------------------------------- -- nice data set with random data (skewed pow) -- ------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.005, -- arbitrary threshold of 0.5% (CASE WHEN abs(a - b) < 0.005 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ----------------------------------------------------- -- nice data set with random data (skewed pow+pow) -- ----------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ---------------------------------------------------------- -- nice data set with random data (normal distribution) -- ---------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.025, -- arbitrary threshold of 2.5% (CASE WHEN abs(a - b) < 0.025 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- some basic tests to verify transforming from and to text work -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), intermediate AS (SELECT tdigest(x, 10)::text AS intermediate_x FROM data), tdigest_parsed AS (SELECT tdigest_percentile(intermediate_x::tdigest, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS a FROM intermediate), pg_percentile AS (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) AS b FROM data) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM tdigest_parsed, pg_percentile ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- verify we can store tdigest in a summary table CREATE TABLE intermediate_tdigest (grouping int, summary tdigest); WITH data AS (SELECT row_number() OVER () AS i, pow(z, 4) AS x FROM random_normal(100000) s(z)) INSERT INTO intermediate_tdigest SELECT i % 10 AS grouping, tdigest(x, 100) AS summary FROM data GROUP BY i % 10; WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), intermediate AS (SELECT tdigest_percentile(summary, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS a FROM intermediate_tdigest), pg_percentile AS (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) AS b FROM data) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM intermediate, pg_percentile ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- verify 'extreme' percentiles for the dataset would not read out of bounds on the centroids WITH data AS (SELECT x FROM generate_series(1,10) AS x) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% given the small dataset and extreme percentiles it is not very accurate (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.99 | t | (2 rows) -- check that the computed percentiles are perfectly correlated (don't decrease for higher p values) -- first test on a tiny t-digest with all centroids having count = 1 WITH -- percentiles to compute perc AS (SELECT array_agg((i / 100.0)::double precision) AS percentiles FROM generate_series(1,99) s(i)), -- input data (just 15 points) input_data AS (select i::double precision AS val FROM generate_series(1,15) s(i)) SELECT * FROM ( SELECT p, v AS v1, lag(v, 1) OVER (ORDER BY p) v2 FROM ( SELECT unnest(perc.percentiles) p, unnest(tdigest_percentile(input_data.val, 100, perc.percentiles)) v FROM perc, input_data GROUP BY perc.percentiles ) foo ) bar where v2 > v1; p | v1 | v2 ---+----+---- (0 rows) tdigest-1.4.1/test/expected/cast.out000066400000000000000000000153551450426374500174460ustar00rootroot00000000000000-- test casting to json SELECT cast(tdigest(i / 1000.0, 10) as json) from generate_series(1,1000) s(i); tdigest --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"flags": 1, "count": 1000, "compression": 10, "centroids": 13, "mean": [0.001, 0.002, 0.0045, 0.013, 0.0405, 0.135, 0.464, 0.793, 0.916, 0.9795, 0.996, 0.999, 1], "count": [1, 1, 4, 13, 42, 147, 511, 147, 99, 28, 5, 1, 1]} (1 row) SELECT cast(tdigest(i / 1000.0, 25) as json) from generate_series(1,1000) s(i); tdigest ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"flags": 1, "count": 1000, "compression": 25, "centroids": 18, "mean": [0.001, 0.002, 0.003, 0.0055, 0.012, 0.0265, 0.0575, 0.115, 0.232, 0.472, 0.727, 0.8775, 0.949, 0.9765, 0.9915, 0.997, 0.999, 1], "count": [1, 1, 1, 4, 9, 20, 42, 73, 161, 319, 191, 110, 33, 22, 8, 3, 1, 1]} (1 row) SELECT cast(tdigest(i / 1000.0, 100) as json) from generate_series(1,1000) s(i); tdigest ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"flags": 1, "count": 1000, "compression": 100, "centroids": 40, "mean": [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.0075, 0.01, 0.0135, 0.018, 0.0245, 0.034, 0.047, 0.065, 0.09, 
0.1245, 0.171, 0.2315, 0.3075, 0.3985, 0.501, 0.6035, 0.6945, 0.7705, 0.831, 0.8775, 0.912, 0.937, 0.955, 0.968, 0.9775, 0.984, 0.9885, 0.992, 0.9945, 0.996, 0.997, 0.998, 0.999, 1], "count": [1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 8, 11, 15, 21, 29, 40, 53, 68, 84, 98, 107, 98, 84, 68, 53, 40, 29, 21, 15, 11, 8, 5, 4, 3, 2, 1, 1, 1, 1, 1]} (1 row) -- test casting to double precision array SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 10) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; array_agg ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ {1.000,1000.000,10.000,13.000,0.001,1.000,0.002,1.000,0.005,4.000,0.013,13.000,0.041,42.000,0.135,147.000,0.464,511.000,0.793,147.000,0.916,99.000,0.980,28.000,0.996,5.000,0.999,1.000,1.000,1.000} (1 row) SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 25) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; array_agg --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {1.000,1000.000,25.000,18.000,0.001,1.000,0.002,1.000,0.003,1.000,0.006,4.000,0.012,9.000,0.027,20.000,0.058,42.000,0.115,73.000,0.232,161.000,0.472,319.000,0.727,191.000,0.878,110.000,0.949,33.000,0.977,22.000,0.992,8.000,0.997,3.000,0.999,1.000,1.000,1.000} (1 row) SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 100) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; array_agg ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {1.000,1000.000,100.000,40.000,0.001,1.000,0.002,1.000,0.003,1.000,0.004,1.000,0.005,1.000,0.006,1.000,0.008,2.000,0.010,3.000,0.014,4.000,0.018,5.000,0.025,8.000,0.034,11.000,0.047,15.000,0.065,21.000,0.090,29.000,0.125,40.000,0.171,53.000,0.232,68.000,0.308,84.000,0.399,98.000,0.501,107.000,0.604,98.000,0.695,84.000,0.771,68.000,0.831,53.000,0.878,40.000,0.912,29.000,0.937,21.000,0.955,15.000,0.968,11.000,0.978,8.000,0.984,5.000,0.989,4.000,0.992,3.000,0.995,2.000,0.996,1.000,0.997,1.000,0.998,1.000,0.999,1.000,1.000,1.000} (1 row) tdigest-1.4.1/test/expected/conversions.out000066400000000000000000000036471450426374500210650ustar00rootroot00000000000000-- test input function, and conversion from old to new format SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; tdigest ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- flags 1 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 
1) (3500.000000, 2) (6500.000000, 4) (12000.000000, 7) (17000.000000, 3) (19000.000000, 1) (20000.000000, 1) (1 row) -- test input of invalid data -- negative count SELECT 'flags 0 count -20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; ERROR: count value for the t-digest must be positive LINE 1: SELECT 'flags 0 count -20 compression 10 centroids 8 (1000.0... ^ -- mismatching count SELECT 'flags 0 count 21 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; ERROR: total count does not match the data (20 != 21) LINE 1: SELECT 'flags 0 count 21 compression 10 centroids 8 (1000.00... ^ -- incorrectly sorted centroids SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (1000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; ERROR: centroids not sorted by mean LINE 1: SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.00... ^ tdigest-1.4.1/test/expected/incremental.out000066400000000000000000000061701450426374500210070ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test incremental API (adding values one by one) CREATE TABLE t (d tdigest); INSERT INTO t VALUES (NULL); -- check this produces the same result as building the tdigest at once, but we -- need to be careful about feeding the data in the same order, and we must -- not compactify the t-digest after each increment DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT i FROM generate_series(1,1000) s(i) ORDER BY md5(i::text)) LOOP UPDATE t SET d = tdigest_add(d, r.i, 100, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT i FROM generate_series(1,1000) s(i) ORDER BY md5(i::text)) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.i, 100)::text FROM x) AS match; match ------- t (1 row) -- now try the same thing with bulk incremental update (using arrays) TRUNCATE t; INSERT INTO t VALUES (NULL); DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT a, array_agg(i::double precision) AS v FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) LOOP UPDATE t SET d = tdigest_add(d, r.v, 100, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT mod(i,5) AS a, i::double precision AS d FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), i) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT 
tdigest(x.d, 100)::text FROM x); ?column? ---------- t (1 row) -- now try the same thing with bulk incremental update (using t-digests) TRUNCATE t; INSERT INTO t VALUES (NULL); DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT a, tdigest(i,100) AS d FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) LOOP UPDATE t SET d = tdigest_union(d, r.d, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT a, tdigest(i,100) AS d FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.d)::text FROM x); ?column? ---------- t (1 row) tdigest-1.4.1/test/expected/parallel_query.out000066400000000000000000000232511450426374500215270ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test parallel query DROP TABLE t; CREATE TABLE t (v double precision, c int, d int); INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i); ANALYZE t; CREATE TABLE t2 (d tdigest); INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d; ANALYZE t2; -- individual values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (11 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; ?column? | ?column? 
----------+---------- 950 | t (1 row) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; QUERY PLAN -------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (11 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo -> ProjectSet InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (12 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 2 (returns $2) -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (15 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (12 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; QUERY PLAN ---------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 2 (returns $2) -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (15 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/parallel_query_1.out000066400000000000000000000232711450426374500217510ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test parallel query DROP TABLE t; CREATE TABLE t (v double precision, c int, d int); INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i); ANALYZE t; CREATE TABLE t2 (d tdigest); INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d; ANALYZE t2; -- individual values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Finalize Aggregate InitPlan 2 (returns $2) -> CTE Scan on x -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (13 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Finalize Aggregate InitPlan 2 (returns $2) -> CTE Scan on x -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (13 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; ?column? | ?column? 
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> ProjectSet InitPlan 2 (returns $2) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (14 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> ProjectSet InitPlan 3 (returns $3) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (17 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> ProjectSet InitPlan 2 (returns $2) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (14 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> ProjectSet InitPlan 3 (returns $3) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (17 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/parallel_query_2.out000066400000000000000000000231071450426374500217500ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test parallel query DROP TABLE t; CREATE TABLE t (v double precision, c int, d int); INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i); ANALYZE t; CREATE TABLE t2 (d tdigest); INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d; ANALYZE t2; -- individual values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Aggregate InitPlan 2 (returns $2) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (12 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; QUERY PLAN -------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Aggregate InitPlan 2 (returns $2) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t2 (12 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; ?column? | ?column? 
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> ProjectSet InitPlan 2 (returns $2) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (14 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> ProjectSet InitPlan 3 (returns $3) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (17 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> ProjectSet InitPlan 2 (returns $2) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (14 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> ProjectSet InitPlan 3 (returns $3) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (17 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/parallel_query_3.out000066400000000000000000000222171450426374500217520ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test parallel query DROP TABLE t; CREATE TABLE t (v double precision, c int, d int); INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i); ANALYZE t; CREATE TABLE t2 (d tdigest); INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d; ANALYZE t2; -- individual values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Aggregate InitPlan 2 (returns $1) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (12 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; QUERY PLAN -------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Aggregate InitPlan 2 (returns $1) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t2 (12 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; ?column? | ?column? 
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Aggregate InitPlan 2 (returns $1) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (12 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Aggregate InitPlan 3 (returns $2) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (15 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? -----+---------- 950 | t 990 | t (2 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; QUERY PLAN -------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Aggregate InitPlan 2 (returns $1) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t2 (12 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; p | ?column? 
------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; QUERY PLAN ---------------------------------------------------- Subquery Scan on foo CTE x -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Aggregate InitPlan 3 (returns $2) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t2 (15 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; p | ?column? -----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/tmp.out000066400000000000000000002250701450426374500173110ustar00rootroot00000000000000\set ECHO none -- SRF function implementing a simple deterministic PRNG CREATE OR REPLACE FUNCTION prng(nrows int, seed int = 23982, p1 bigint = 16807, p2 bigint = 0, n bigint = 2147483647) RETURNS SETOF double precision AS $$ DECLARE val INT := seed; BEGIN FOR i IN 1..nrows LOOP val := (val * p1 + p2) % n; RETURN NEXT (val::double precision / n); END LOOP; RETURN; END; $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION random_normal(nrows int, mean double precision = 0.5, stddev double precision = 0.1, minval double precision = 0.0, maxval double precision = 1.0, seed int = 23982, p1 bigint = 16807, p2 bigint = 0, n bigint = 2147483647) RETURNS SETOF double precision AS $$ DECLARE v BIGINT := seed; x DOUBLE PRECISION; y DOUBLE PRECISION; s DOUBLE PRECISION; r INT := nrows; BEGIN WHILE true LOOP -- random x v := (v * p1 + p2) % n; x := 2 * v / n::double precision - 1.0; -- random y v := (v * p1 + p2) % n; y := 2 * v / n::double precision - 1.0; s := x^2 + y^2; IF s != 0.0 AND s < 1.0 THEN s = sqrt(-2 * ln(s) / s); x := mean + stddev * s * x; IF x >= minval AND x <= maxval THEN RETURN NEXT x; r := r - 1; END IF; EXIT WHEN r = 0; y := mean + stddev * s * y; IF y >= minval AND y <= maxval THEN RETURN NEXT y; r := r - 1; END IF; EXIT WHEN r = 0; END IF; END LOOP; END; $$ LANGUAGE plpgsql; DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; ----------------------------------------------------------- -- nice data set with ordered (asc) / evenly-spaced data -- ----------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.01, -- 
arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ------------------------------------------------------------ -- nice data set with ordered (desc) / evenly-spaced data -- ------------------------------------------------------------ -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ---------------------------------------------------- -- nice data set with random / evenly-spaced data -- ---------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -------------------------------------------------- -- nice data set with random data (skewed sqrt) -- -------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ------------------------------------------------------- -- nice data set with random data (skewed sqrt+sqrt) -- ------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ------------------------------------------------- -- nice data set with random data (skewed pow) -- ------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.005, -- arbitrary threshold of 0.5% (CASE WHEN abs(a - b) < 0.005 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ----------------------------------------------------- -- nice data set with random data (skewed pow+pow) -- ----------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) ---------------------------------------------------------- -- nice data set with random data (normal distribution) -- ---------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.025, -- arbitrary threshold of 2.5% (CASE WHEN abs(a - b) < 0.025 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; p | a | b ---+---+--- (0 rows) -- some basic tests to verify transforming from and to text works -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), intermediate AS (SELECT tdigest(x, 10)::text AS intermediate_x FROM data), tdigest_parsed AS (SELECT tdigest_percentile(intermediate_x::tdigest, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS a FROM intermediate), pg_percentile AS (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) AS b FROM data) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM tdigest_parsed, pg_percentile ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- verify we can store tdigest in a summary table CREATE TABLE intermediate_tdigest (grouping int, summary tdigest); WITH data AS (SELECT row_number() OVER () AS i, pow(z, 4) AS x FROM random_normal(100000) s(z)) INSERT INTO intermediate_tdigest SELECT i % 10 AS grouping, tdigest(x, 100) AS summary FROM data GROUP BY i % 10; WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), intermediate AS (SELECT tdigest_percentile(summary, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS a FROM intermediate_tdigest), pg_percentile AS (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) AS b FROM data) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM intermediate, pg_percentile ) foo; p | ?column?
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- verify 'extreme' percentiles for the dataset would not read out of bounds on the centroids WITH data AS (SELECT x FROM generate_series(1,10) AS x) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% given the small dataset and extreme percentiles it is not very accurate (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; p | ?column? | err ------+----------+----- 0.01 | t | 0.99 | t | (2 rows) -- check that the computed percentiles are perfectly correlated (don't decrease for higher p values) -- first test on a tiny t-digest with all centroids having count = 1 WITH -- percentiles to compute perc AS (SELECT array_agg((i / 100.0)::double precision) AS percentiles FROM generate_series(1,99) s(i)), -- input data (just 15 points) input_data AS (select i::double precision AS val FROM generate_series(1,15) s(i)) SELECT * FROM ( SELECT p, v AS v1, lag(v, 1) OVER (ORDER BY p) v2 FROM ( SELECT unnest(perc.percentiles) p, unnest(tdigest_percentile(input_data.val, 100, perc.percentiles)) v FROM perc, input_data GROUP BY perc.percentiles ) foo ) bar where v2 > v1; p | v1 | v2 ---+----+---- (0 rows) -- test casting to json SELECT cast(tdigest(i / 1000.0, 10) as json) from generate_series(1,1000) s(i); tdigest --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"flags": 1, "count": 1000, "compression": 10, "centroids": 13, "mean": [0.001, 0.002, 0.0045, 0.013, 0.0405, 0.135, 0.464, 0.793, 0.916, 0.9795, 0.996, 0.999, 1], "count": [1, 1, 4, 13, 42, 147, 511, 147, 99, 28, 5, 1, 1]} (1 row) SELECT cast(tdigest(i / 1000.0, 25) as json) from generate_series(1,1000) s(i); tdigest ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"flags": 1, "count": 1000, "compression": 25, "centroids": 18, "mean": [0.001, 0.002, 0.003, 0.0055, 0.012, 0.0265, 0.0575, 0.115, 0.232, 0.472, 0.727, 0.8775, 0.949, 0.9765, 0.9915, 0.997, 0.999, 1], "count": [1, 1, 1, 4, 9, 20, 42, 73, 161, 319, 191, 110, 33, 22, 8, 3, 1, 1]} (1 row) SELECT cast(tdigest(i / 1000.0, 100) as json) from generate_series(1,1000) s(i); tdigest ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {"flags": 1, "count": 1000, "compression": 100, "centroids": 40, "mean": [0.001, 0.002, 0.003, 0.004, 0.005, 0.006, 0.0075, 0.01, 0.0135, 0.018, 0.0245, 0.034, 0.047, 0.065, 0.09, 0.1245, 0.171, 0.2315, 0.3075, 0.3985, 0.501, 0.6035, 0.6945, 0.7705, 0.831, 0.8775, 0.912, 0.937, 0.955, 0.968, 
0.9775, 0.984, 0.9885, 0.992, 0.9945, 0.996, 0.997, 0.998, 0.999, 1], "count": [1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 8, 11, 15, 21, 29, 40, 53, 68, 84, 98, 107, 98, 84, 68, 53, 40, 29, 21, 15, 11, 8, 5, 4, 3, 2, 1, 1, 1, 1, 1]} (1 row) -- test casting to double precision array SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 10) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; array_agg ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ {1.000,1000.000,10.000,13.000,0.001,1.000,0.002,1.000,0.005,4.000,0.013,13.000,0.041,42.000,0.135,147.000,0.464,511.000,0.793,147.000,0.916,99.000,0.980,28.000,0.996,5.000,0.999,1.000,1.000,1.000} (1 row) SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 25) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; array_agg --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {1.000,1000.000,25.000,18.000,0.001,1.000,0.002,1.000,0.003,1.000,0.006,4.000,0.012,9.000,0.027,20.000,0.058,42.000,0.115,73.000,0.232,161.000,0.472,319.000,0.727,191.000,0.878,110.000,0.949,33.000,0.977,22.000,0.992,8.000,0.997,3.000,0.999,1.000,1.000,1.000} (1 row) SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 100) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; array_agg ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- {1.000,1000.000,100.000,40.000,0.001,1.000,0.002,1.000,0.003,1.000,0.004,1.000,0.005,1.000,0.006,1.000,0.008,2.000,0.010,3.000,0.014,4.000,0.018,5.000,0.025,8.000,0.034,11.000,0.047,15.000,0.065,21.000,0.090,29.000,0.125,40.000,0.171,53.000,0.232,68.000,0.308,84.000,0.399,98.000,0.501,107.000,0.604,98.000,0.695,84.000,0.771,68.000,0.831,53.000,0.878,40.000,0.912,29.000,0.937,21.000,0.955,15.000,0.968,11.000,0.978,8.000,0.984,5.000,0.989,4.000,0.992,3.000,0.995,2.000,0.996,1.000,0.997,1.000,0.998,1.000,0.999,1.000,1.000,1.000} (1 row) -- API select tdigest_percentile(value, count, 100, 0.95) from (values (47325940488,1), (15457695432,2), (6889790700,3), (4188763788,4), (2882932224,5), (2114815860,6), (1615194324,7), (2342114568,9), (1626471924,11), (1660755408,14), (1143728292,17), (1082582424,21), (911488284,26), (728863908,32), (654898692,40), (530198076,50), (417883440,62), (341452344,77), (274579584,95), (231921120,118), (184091820,146), (152469828,181), (125634972,224), (107059704,278), (88746120,345), (73135668,428), (61035756,531), (50683320,658), (42331824,816), (35234400,1012), (29341356,1255), (24290928,1556), (20284668,1929), (17215908,2391), (14737488,2964), (12692772,3674), (11220732,4555), (9787584,5647), (8148420,7000), (6918612,8678), (6015000,10758), (5480316,13336), 
(5443356,16532), (4535616,20494), (3962316,25406), (3914484,31495), (3828108,39043), (3583536,48400), (4104120,60000), (166024740,2147483647)) foo (count, value); tdigest_percentile -------------------- 30.3586183216119 (1 row) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- test incremental API (adding values one by one) CREATE TABLE t (d tdigest); INSERT INTO t VALUES (NULL); -- check this produces the same result as building the tdigest at once, but we -- need to be careful about feeding the data in the same order, and we must -- not compactify the t-digest after each increment DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT i FROM generate_series(1,1000) s(i) ORDER BY md5(i::text)) LOOP UPDATE t SET d = tdigest_add(d, r.i, 100, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT i FROM generate_series(1,1000) s(i) ORDER BY md5(i::text)) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.i, 100)::text FROM x) AS match; match ------- t (1 row) -- now try the same thing with bulk incremental update (using arrays) TRUNCATE t; INSERT INTO t VALUES (NULL); DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT a, array_agg(i::double precision) AS v FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) LOOP UPDATE t SET d = tdigest_add(d, r.v, 100, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT mod(i,5) AS a, i::double precision AS d FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), i) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.d, 100)::text FROM x); ?column? ---------- t (1 row) -- now try the same thing with bulk incremental update (using t-digests) TRUNCATE t; INSERT INTO t VALUES (NULL); DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT a, tdigest(i,100) AS d FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) LOOP UPDATE t SET d = tdigest_union(d, r.d, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT a, tdigest(i,100) AS d FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.d)::text FROM x); ?column? ---------- t (1 row) -- test parallel query DROP TABLE t; CREATE TABLE t (v double precision, c int, d int); INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i); ANALYZE t; CREATE TABLE t2 (d tdigest); INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d; ANALYZE t2; -- individual values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (11 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; ?column? | ?column?
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; QUERY PLAN -------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (11 rows) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------- Subquery Scan on foo -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (6 rows) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo -> ProjectSet InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (12 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? 
------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 2 (returns $2) -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t t_1 -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (15 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? -----+---------- 950 | t 990 | t (2 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; QUERY PLAN ------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 1 (returns $1) -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (12 rows) WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t2) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; QUERY PLAN ---------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 2 (returns $2) -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Parallel Seq Scan on t -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t2 (15 rows) WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b FROM t2) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) -- API EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $2) -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (13 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $2) -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (13 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------------------ Subquery Scan on foo -> ProjectSet InitPlan 1 (returns $2) -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (14 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? 
------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 2 (returns $3) -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (17 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? -----+---------- 950 | t 990 | t (2 rows) -- test input function, and conversion from old to new format SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; tdigest ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- flags 1 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (3500.000000, 2) (6500.000000, 4) (12000.000000, 7) (17000.000000, 3) (19000.000000, 1) (20000.000000, 1) (1 row) -- test input of invalid data -- negative count SELECT 'flags 0 count -20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; ERROR: count value for the t-digest must be positive LINE 1: SELECT 'flags 0 count -20 compression 10 centroids 8 (1000.0... ^ -- mismatching count SELECT 'flags 0 count 21 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; ERROR: total count does not match the data (20 != 21) LINE 1: SELECT 'flags 0 count 21 compression 10 centroids 8 (1000.00... ^ -- incorrectly sorted centroids SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (1000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; ERROR: centroids not sorted by mean LINE 1: SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.00... 
^ -- check trimmed mean (from raw data) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x) SELECT tdigest_avg(data.r, 50, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90, tdigest_avg(data.r, 50, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75, tdigest_avg(data.r, 50, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50, tdigest_avg(data.r, 50, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100 FROM data; mean_10_90 | mean_25_75 | mean_0_50 | mean_50_100 ------------+------------+-----------+------------- t | t | t | t (1 row) WITH data AS (SELECT random() AS r, (1 + (3 * random())::int) AS c FROM generate_series(1,10000) AS x) SELECT tdigest_avg(data.r, data.c, 100, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90, tdigest_avg(data.r, data.c, 100, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75, tdigest_avg(data.r, data.c, 100, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50, tdigest_avg(data.r, data.c, 100, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100 FROM data; mean_10_90 | mean_25_75 | mean_0_50 | mean_50_100 ------------+------------+-----------+------------- t | t | t | t (1 row) -- check trimmed mean (from precalculated tdigest) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x) SELECT tdigest_avg(data.d, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90, tdigest_avg(data.d, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75, tdigest_avg(data.d, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50, tdigest_avg(data.d, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100 FROM data; mean_10_90 | mean_25_75 | mean_0_50 | mean_50_100 ------------+------------+-----------+------------- t | t | t | t (1 row) -- check trimmed sum (from raw data) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x) SELECT tdigest_sum(data.r, 50, 0.1, 0.9) between 8000 * 0.45 and 8000 * 0.55 AS sum_10_90, tdigest_sum(data.r, 50, 0.25, 0.75) between 5000 * 0.45 and 5000 * 0.55 AS sum_25_75, tdigest_sum(data.r, 50, 0.0, 0.5) between 5000 * 0.2 and 5000 * 0.3 AS sum_0_50, tdigest_sum(data.r, 50, 0.5, 1.0) between 5000 * 0.7 and 5000 * 0.8 AS sum_50_100 FROM data; sum_10_90 | sum_25_75 | sum_0_50 | sum_50_100 -----------+-----------+----------+------------ t | t | t | t (1 row) WITH data AS (SELECT random() AS r, (1 + (3 * random())::int) AS c FROM generate_series(1,10000) AS x) SELECT tdigest_sum(data.r, data.c, 100, 0.1, 0.9) between 20000 * 0.45 and 20000 * 0.55 AS sum_10_90, tdigest_sum(data.r, data.c, 100, 0.25, 0.75) between 12500 * 0.45 and 12500 * 0.55 AS sum_25_75, tdigest_sum(data.r, data.c, 100, 0.0, 0.5) between 12500 * 0.2 and 12500 * 0.3 AS sum_0_50, tdigest_sum(data.r, data.c, 100, 0.5, 1.0) between 12500 * 0.7 and 12500 * 0.8 AS sum_50_100 FROM data; sum_10_90 | sum_25_75 | sum_0_50 | sum_50_100 -----------+-----------+----------+------------ t | t | t | t (1 row) -- check trimmed sum (from precalculated tdigest) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x) SELECT tdigest_sum(data.d, 0.1, 0.9) between 8000 * 0.45 and 8000 * 0.55 AS sum_10_90, tdigest_sum(data.d, 0.25, 0.75) between 5000 * 0.45 and 5000 * 0.55 AS sum_25_75, tdigest_sum(data.d, 0.0, 0.5) between 5000 * 0.2 and 5000 * 0.3 AS sum_0_50, tdigest_sum(data.d, 0.5, 1.0) between 5000 * 0.7 and 5000 * 0.8 AS sum_50_100 FROM
data; sum_10_90 | sum_25_75 | sum_0_50 | sum_50_100 -----------+-----------+----------+------------ t | t | t | t (1 row) WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x) SELECT tdigest_digest_sum(data.d, 0.05, 0.95) between 9000 * 0.45 and 9000 * 0.55 AS sum_05_95, tdigest_digest_avg(data.d, 0.05, 0.95) between 0.45 and 0.55 AS mean_05_95 FROM data; sum_05_95 | mean_05_95 -----------+------------ t | t (1 row) tdigest-1.4.1/test/expected/trimmed_aggregates.out000066400000000000000000000120731450426374500223400ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- check trimmed mean (from raw data) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x) SELECT tdigest_avg(data.r, 50, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90, tdigest_avg(data.r, 50, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75, tdigest_avg(data.r, 50, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50, tdigest_avg(data.r, 50, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100 FROM data; mean_10_90 | mean_25_75 | mean_0_50 | mean_50_100 ------------+------------+-----------+------------- t | t | t | t (1 row) WITH data AS (SELECT random() AS r, (1 + (3 * random())::int) AS c FROM generate_series(1,10000) AS x) SELECT tdigest_avg(data.r, data.c, 100, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90, tdigest_avg(data.r, data.c, 100, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75, tdigest_avg(data.r, data.c, 100, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50, tdigest_avg(data.r, data.c, 100, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100 FROM data; mean_10_90 | mean_25_75 | mean_0_50 | mean_50_100 ------------+------------+-----------+------------- t | t | t | t (1 row) -- check trimmed mean (from precalculated tdigest) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x) SELECT tdigest_avg(data.d, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90, tdigest_avg(data.d, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75, tdigest_avg(data.d, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50, tdigest_avg(data.d, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100 FROM data; mean_10_90 | mean_25_75 | mean_0_50 | mean_50_100 ------------+------------+-----------+------------- t | t | t | t (1 row) -- check trimmed sum (from raw data) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x) SELECT tdigest_sum(data.r, 50, 0.1, 0.9) between 8000 * 0.45 and 8000 * 0.55 AS sum_10_90, tdigest_sum(data.r, 50, 0.25, 0.75) between 5000 * 0.45 and 5000 * 0.55 AS sum_25_75, tdigest_sum(data.r, 50, 0.0, 0.5) between 5000 *
0.2 and 5000 * 0.3 AS sum_0_50, tdigest_sum(data.r, 50, 0.5, 1.0) between 5000 * 0.7 and 5000 * 0.8 AS sum_50_100 FROM data; sum_10_90 | sum_25_75 | sum_0_50 | sum_50_100 -----------+-----------+----------+------------ t | t | t | t (1 row) WITH data AS (SELECT random() AS r, (1 + (3 * random())::int) AS c FROM generate_series(1,10000) AS x) SELECT tdigest_sum(data.r, data.c, 100, 0.1, 0.9) between 20000 * 0.45 and 20000 * 0.55 AS sum_10_90, tdigest_sum(data.r, data.c, 100, 0.25, 0.75) between 12500 * 0.45 and 12500 * 0.55 AS sum_25_75, tdigest_sum(data.r, data.c, 100, 0.0, 0.5) between 12500 * 0.2 and 12500 * 0.3 AS sum_0_50, tdigest_sum(data.r, data.c, 100, 0.5, 1.0) between 12500 * 0.7 and 12500 * 0.8 AS sum_50_100 FROM data; sum_10_90 | sum_25_75 | sum_0_50 | sum_50_100 -----------+-----------+----------+------------ t | t | t | t (1 row) -- check trimmed sum (from precalculated tdigest) -- we compare the result to a range, to deal with the randomness WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x) SELECT tdigest_sum(data.d, 0.1, 0.9) between 8000 * 0.45 and 8000 * 0.55 AS sum_10_90, tdigest_sum(data.d, 0.25, 0.75) between 5000 * 0.45 and 5000 * 0.55 AS sum_25_75, tdigest_sum(data.d, 0.0, 0.5) between 5000 * 0.2 and 5000 * 0.3 AS sum_0_50, tdigest_sum(data.d, 0.5, 1.0) between 5000 * 0.7 and 5000 * 0.8 AS sum_50_100 FROM data; sum_10_90 | sum_25_75 | sum_0_50 | sum_50_100 -----------+-----------+----------+------------ t | t | t | t (1 row) tdigest-1.4.1/test/expected/value_count_api.out000066400000000000000000000250621450426374500216650ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- API select tdigest_percentile(value, count, 100, 0.95) from (values (47325940488,1), (15457695432,2), (6889790700,3), (4188763788,4), (2882932224,5), (2114815860,6), (1615194324,7), (2342114568,9), (1626471924,11), (1660755408,14), (1143728292,17), (1082582424,21), (911488284,26), (728863908,32), (654898692,40), (530198076,50), (417883440,62), (341452344,77), (274579584,95), (231921120,118), (184091820,146), (152469828,181), (125634972,224), (107059704,278), (88746120,345), (73135668,428), (61035756,531), (50683320,658), (42331824,816), (35234400,1012), (29341356,1255), (24290928,1556), (20284668,1929), (17215908,2391), (14737488,2964), (12692772,3674), (11220732,4555), (9787584,5647), (8148420,7000), (6918612,8678), (6015000,10758), (5480316,13336), (5443356,16532),
(4535616,20494), (3962316,25406), (3914484,31495), (3828108,39043), (3583536,48400), (4104120,60000), (166024740,2147483647)) foo (count, value); tdigest_percentile -------------------- 30.3586183216119 (1 row) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- API EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $2) -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (13 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; ?column? | ?column? ----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------------------ Subquery Scan on foo -> Finalize Aggregate InitPlan 1 (returns $2) -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (13 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------------------ Subquery Scan on foo -> ProjectSet InitPlan 1 (returns $2) -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (14 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? 
------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN -------------------------------------------------------------------------- Subquery Scan on foo -> ProjectSet InitPlan 2 (returns $3) -> Aggregate -> Function Scan on unnest f SubPlan 1 -> Aggregate -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (17 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? -----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/value_count_api_1.out000066400000000000000000000246701450426374500221110ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- API select tdigest_percentile(value, count, 100, 0.95) from (values (47325940488,1), (15457695432,2), (6889790700,3), (4188763788,4), (2882932224,5), (2114815860,6), (1615194324,7), (2342114568,9), (1626471924,11), (1660755408,14), (1143728292,17), (1082582424,21), (911488284,26), (728863908,32), (654898692,40), (530198076,50), (417883440,62), (341452344,77), (274579584,95), (231921120,118), (184091820,146), (152469828,181), (125634972,224), (107059704,278), (88746120,345), (73135668,428), (61035756,531), (50683320,658), (42331824,816), (35234400,1012), (29341356,1255), (24290928,1556), (20284668,1929), (17215908,2391), (14737488,2964), (12692772,3674), (11220732,4555), (9787584,5647), (8148420,7000), (6918612,8678), (6015000,10758), (5480316,13336), (5443356,16532), (4535616,20494), (3962316,25406), (3914484,31495), (3828108,39043), (3583536,48400), (4104120,60000), (166024740,2147483647)) foo (count, value); tdigest_percentile -------------------- 30.3586183216119 (1 row) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo 
ORDER BY random()) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- API EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Finalize Aggregate InitPlan 3 (returns $4) -> CTE Scan on x -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (17 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; ?column? | ?column? 
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Finalize Aggregate InitPlan 3 (returns $4) -> CTE Scan on x -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (17 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> ProjectSet InitPlan 3 (returns $4) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (18 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> Function Scan on unnest f SubPlan 2 -> Aggregate -> CTE Scan on d -> ProjectSet InitPlan 4 (returns $5) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (21 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/value_count_api_2.out000066400000000000000000000245201450426374500221040ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- API select tdigest_percentile(value, count, 100, 0.95) from (values (47325940488,1), (15457695432,2), (6889790700,3), (4188763788,4), (2882932224,5), (2114815860,6), (1615194324,7), (2342114568,9), (1626471924,11), (1660755408,14), (1143728292,17), (1082582424,21), (911488284,26), (728863908,32), (654898692,40), (530198076,50), (417883440,62), (341452344,77), (274579584,95), (231921120,118), (184091820,146), (152469828,181), (125634972,224), (107059704,278), (88746120,345), (73135668,428), (61035756,531), (50683320,658), (42331824,816), (35234400,1012), (29341356,1255), (24290928,1556), (20284668,1929), (17215908,2391), (14737488,2964), (12692772,3674), (11220732,4555), (9787584,5647), (8148420,7000), (6918612,8678), (6015000,10758), (5480316,13336), (5443356,16532), (4535616,20494), (3962316,25406), (3914484,31495), (3828108,39043), (3583536,48400), (4104120,60000), (166024740,2147483647)) foo (count, value); tdigest_percentile -------------------- 30.3586183216119 (1 row) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- API EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Aggregate InitPlan 3 (returns $4) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (16 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; ?column? | ?column? 
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Aggregate InitPlan 3 (returns $4) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (16 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> ProjectSet InitPlan 3 (returns $4) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (18 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> Function Scan on unnest f SubPlan 2 -> Aggregate -> CTE Scan on d -> ProjectSet InitPlan 4 (returns $5) -> CTE Scan on x -> Finalize Aggregate -> Gather Workers Planned: 2 -> Partial Aggregate -> Parallel Seq Scan on t (21 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/expected/value_count_api_3.out000066400000000000000000000242101450426374500221010ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- API select tdigest_percentile(value, count, 100, 0.95) from (values (47325940488,1), (15457695432,2), (6889790700,3), (4188763788,4), (2882932224,5), (2114815860,6), (1615194324,7), (2342114568,9), (1626471924,11), (1660755408,14), (1143728292,17), (1082582424,21), (911488284,26), (728863908,32), (654898692,40), (530198076,50), (417883440,62), (341452344,77), (274579584,95), (231921120,118), (184091820,146), (152469828,181), (125634972,224), (107059704,278), (88746120,345), (73135668,428), (61035756,531), (50683320,658), (42331824,816), (35234400,1012), (29341356,1255), (24290928,1556), (20284668,1929), (17215908,2391), (14737488,2964), (12692772,3674), (11220732,4555), (9787584,5647), (8148420,7000), (6918612,8678), (6015000,10758), (5480316,13336), (5443356,16532), (4535616,20494), (3962316,25406), (3914484,31495), (3828108,39043), (3583536,48400), (4104120,60000), (166024740,2147483647)) foo (count, value); tdigest_percentile -------------------- 30.3586183216119 (1 row) ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? 
| err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 100 centroids (okay-ish) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- 1000 centroids (very accurate) WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt), data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random()) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo, (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar ) baz; p | ?column? | err ------+----------+----- 0.01 | t | 0.05 | t | 0.1 | t | 0.9 | t | 0.95 | t | 0.99 | t | (6 rows) -- API EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Aggregate InitPlan 3 (returns $3) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (16 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo; ?column? | ?column? 
----------+---------- 0.95 | t (1 row) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Aggregate InitPlan 3 (returns $3) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (16 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo; ?column? | ?column? ----------+---------- 950 | t (1 row) -- array of percentiles / values EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> CTE Scan on d -> Aggregate InitPlan 3 (returns $3) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (16 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d) SELECT p, abs(a - b) / 1000 < 0.01 FROM ( SELECT unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p, unnest((SELECT p FROM x)) AS a, unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b FROM t) foo; p | ?column? ------+---------- 0.0 | t 0.95 | t 0.99 | t 1.0 | t (4 rows) EXPLAIN (COSTS OFF) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; QUERY PLAN ------------------------------------------------------ Subquery Scan on foo CTE d -> Gather Workers Planned: 2 -> Nested Loop -> Parallel Seq Scan on t t_1 -> Function Scan on generate_series CTE x -> Aggregate -> Function Scan on unnest f SubPlan 2 -> Aggregate -> CTE Scan on d -> Aggregate InitPlan 4 (returns $4) -> CTE Scan on x -> Gather Workers Planned: 2 -> Parallel Seq Scan on t (19 rows) WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)), x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f) SELECT p, abs(a - b) < 0.01 FROM ( SELECT unnest(ARRAY[950, 990]) AS p, unnest((select x.p from x)) AS a, unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b FROM t) foo; p | ?column? 
-----+---------- 950 | t 990 | t (2 rows) tdigest-1.4.1/test/sql/000077500000000000000000000000001450426374500147505ustar00rootroot00000000000000tdigest-1.4.1/test/sql/basic.sql000066400000000000000000001055341450426374500165600ustar00rootroot00000000000000\set ECHO none -- disable the notices for the create script (shell types etc.) SET client_min_messages = 'WARNING'; \i tdigest--1.0.0.sql \i tdigest--1.0.0--1.0.1.sql \i tdigest--1.0.1--1.2.0.sql \i tdigest--1.2.0--1.3.0.sql \i tdigest--1.3.0--1.4.0.sql \i tdigest--1.4.0--1.4.1.sql SET client_min_messages = 'NOTICE'; SET extra_float_digits = 0; \set ECHO all -- SRF function implementing a simple deterministic PRNG CREATE OR REPLACE FUNCTION prng(nrows int, seed int = 23982, p1 bigint = 16807, p2 bigint = 0, n bigint = 2147483647) RETURNS SETOF double precision AS $$ DECLARE val INT := seed; BEGIN FOR i IN 1..nrows LOOP val := (val * p1 + p2) % n; RETURN NEXT (val::double precision / n); END LOOP; RETURN; END; $$ LANGUAGE plpgsql; CREATE OR REPLACE FUNCTION random_normal(nrows int, mean double precision = 0.5, stddev double precision = 0.1, minval double precision = 0.0, maxval double precision = 1.0, seed int = 23982, p1 bigint = 16807, p2 bigint = 0, n bigint = 2147483647) RETURNS SETOF double precision AS $$ DECLARE v BIGINT := seed; x DOUBLE PRECISION; y DOUBLE PRECISION; s DOUBLE PRECISION; r INT := nrows; BEGIN WHILE true LOOP -- random x v := (v * p1 + p2) % n; x := 2 * v / n::double precision - 1.0; -- random y v := (v * p1 + p2) % n; y := 2 * v / n::double precision - 1.0; s := x^2 + y^2; IF s != 0.0 AND s < 1.0 THEN s = sqrt(-2 * ln(s) / s); x := mean + stddev * s * x; IF x >= minval AND x <= maxval THEN RETURN NEXT x; r := r - 1; END IF; EXIT WHEN r = 0; y := mean + stddev * s * y; IF y >= minval AND y <= maxval THEN RETURN NEXT y; r := r - 1; END IF; EXIT WHEN r = 0; END IF; END LOOP; END; $$ LANGUAGE plpgsql; DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; ----------------------------------------------------------- -- nice data set with ordered (asc) / evenly-spaced data -- ----------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) 
s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ------------------------------------------------------------ -- nice data set with ordered (desc) / evenly-spaced data -- ------------------------------------------------------------ -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 
0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(100000,1,-1) s(i)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ---------------------------------------------------- -- nice data set with random / evenly-spaced data -- ---------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, 
LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT i / 100000.0 AS x FROM (SELECT generate_series(1,100000) AS i, prng(100000, 49979693) AS x ORDER BY x) foo), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ---------------------------------------------- -- nice data set with random data (uniform) -- ---------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT x FROM prng(100000) s(x)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM 
data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT x FROM prng(100000) s(x)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -------------------------------------------------- -- nice data set with random data (skewed sqrt) -- -------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(z) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ------------------------------------------------------- -- nice data set with random data (skewed sqrt+sqrt) -- ------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary 
threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT sqrt(sqrt(z)) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ------------------------------------------------- -- nice data set with random data (skewed pow) -- ------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 
10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.005, -- arbitrary threshold of 0.5% (CASE WHEN abs(a - b) < 0.005 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 2) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ----------------------------------------------------- -- nice data set with random data (skewed pow+pow) -- ----------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10% (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT 
array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM prng(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; ---------------------------------------------------------- -- nice data set with random data (normal distribution) -- ---------------------------------------------------------- -- 10 centroids (tiny) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.025, -- arbitrary threshold of 2.5% (CASE WHEN abs(a - b) < 0.025 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 10, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 100 centroids (okay-ish) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 100, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- 1000 centroids (very accurate) WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)) SELECT p, abs(a - b) < 0.001, -- arbitrary threshold of 0.1% (CASE WHEN abs(a - b) < 0.001 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(tdigest_percentile(x, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 
0.95, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- make sure the resulting percentiles are in the right order WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), perc AS (SELECT array_agg((i/100.0)::double precision) AS p FROM generate_series(1,99) s(i)) SELECT * FROM ( SELECT p, a, LAG(a) OVER (ORDER BY p) AS b FROM ( SELECT unnest((SELECT p FROM perc)) AS p, unnest(tdigest_percentile(x, 1000, (SELECT p FROM perc))) AS a FROM data ) foo ) bar WHERE a <= b; -- some basic tests to verify that transforming from and to text works -- 10 centroids (tiny) WITH data AS (SELECT i / 100000.0 AS x FROM generate_series(1,100000) s(i)), intermediate AS (SELECT tdigest(x, 10)::text AS intermediate_x FROM data), tdigest_parsed AS (SELECT tdigest_percentile(intermediate_x::tdigest, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS a FROM intermediate), pg_percentile AS (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) AS b FROM data) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM tdigest_parsed, pg_percentile ) foo; -- verify we can store tdigest in a summary table CREATE TABLE intermediate_tdigest (grouping int, summary tdigest); WITH data AS (SELECT row_number() OVER () AS i, pow(z, 4) AS x FROM random_normal(100000) s(z)) INSERT INTO intermediate_tdigest SELECT i % 10 AS grouping, tdigest(x, 100) AS summary FROM data GROUP BY i % 10; WITH data AS (SELECT pow(z, 4) AS x FROM random_normal(100000) s(z)), intermediate AS (SELECT tdigest_percentile(summary, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS a FROM intermediate_tdigest), pg_percentile AS (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) AS b FROM data) SELECT p, abs(a - b) < 0.01, -- arbitrary threshold of 1% (CASE WHEN abs(a - b) < 0.01 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p, unnest(a) AS a, unnest(b) AS b FROM intermediate, pg_percentile ) foo; -- verify 'extreme' percentiles for the dataset would not read out of bounds on the centroids WITH data AS (SELECT x FROM generate_series(1,10) AS x) SELECT p, abs(a - b) < 0.1, -- arbitrary threshold of 10%; given the small dataset and extreme percentiles it is not very accurate (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err FROM ( SELECT unnest(ARRAY[0.01, 0.99]) AS p, unnest(tdigest_percentile(x, 10, ARRAY[0.01, 0.99])) AS a, unnest(percentile_cont(ARRAY[0.01, 0.99]) WITHIN GROUP (ORDER BY x)) AS b FROM data ) foo; -- check that the computed percentiles are perfectly correlated (don't decrease for higher p values) -- first test on a tiny t-digest with all centroids having count = 1 WITH -- percentiles to compute perc AS (SELECT array_agg((i / 100.0)::double precision) AS percentiles FROM generate_series(1,99) s(i)), -- input data (just 15 points) input_data AS (select i::double precision AS val FROM generate_series(1,15) s(i)) SELECT * FROM ( SELECT p, v AS v1, lag(v, 1) OVER (ORDER BY p) v2 FROM ( SELECT unnest(perc.percentiles) p, unnest(tdigest_percentile(input_data.val, 100, perc.percentiles)) v FROM perc, input_data GROUP BY perc.percentiles ) foo ) bar where v2 > v1; 
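-- Editorial note: an illustrative sketch (not part of the regression suite)
-- of the summary-table pattern the tests above exercise: pre-aggregate one
-- digest per group, store it, and combine the stored digests later. The
-- table and column names below (daily_latency, requests, latency_ms) are
-- hypothetical; the tdigest() and tdigest_percentile() calls are the same
-- ones used in the tests above.
--
-- CREATE TABLE daily_latency (day date, digest tdigest);
-- INSERT INTO daily_latency
--     SELECT day, tdigest(latency_ms, 100) FROM requests GROUP BY day;
--
-- -- combine all per-day digests and estimate the overall 95th percentile
-- SELECT tdigest_percentile(digest, 0.95) FROM daily_latency;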
tdigest-1.4.1/test/sql/cast.sql000066400000000000000000000014311450426374500164220ustar00rootroot00000000000000-- test casting to json SELECT cast(tdigest(i / 1000.0, 10) as json) from generate_series(1,1000) s(i); SELECT cast(tdigest(i / 1000.0, 25) as json) from generate_series(1,1000) s(i); SELECT cast(tdigest(i / 1000.0, 100) as json) from generate_series(1,1000) s(i); -- test casting to double precision array SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 10) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 25) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; SELECT array_agg(round(v::numeric,3)) FROM ( SELECT unnest(cast(tdigest(i / 1000.0, 100) as double precision[])) AS v from generate_series(1,1000) s(i) ) foo; tdigest-1.4.1/test/sql/conversions.sql000066400000000000000000000017311450426374500200430ustar00rootroot00000000000000-- test input function, and conversion from old to new format SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; -- test input of invalid data -- negative count SELECT 'flags 0 count -20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; -- mismatching count SELECT 'flags 0 count 21 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (7000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; -- incorrectly sorted centroids SELECT 'flags 0 count 20 compression 10 centroids 8 (1000.000000, 1) (2000.000000, 1) (1000.000000, 2) (26000.000000, 4) (84000.000000, 7) (51000.000000, 3) (19000.000000, 1) (20000.000000, 1)'::tdigest; tdigest-1.4.1/test/sql/incremental.sql000066400000000000000000000060411450426374500177730ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test incremental API (adding values one by one) CREATE TABLE t (d tdigest); INSERT INTO t VALUES (NULL); -- check this produces the same result as building the tdigest at once, but we -- need to be careful about feeding the data in the same order, and we must -- not compactify the t-digest after each increment DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT i FROM generate_series(1,1000) s(i) ORDER BY md5(i::text)) LOOP UPDATE t SET d = tdigest_add(d, r.i, 100, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT i FROM generate_series(1,1000) s(i) 
ORDER BY md5(i::text)) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.i, 100)::text FROM x) AS match; -- now try the same thing with bulk incremental update (using arrays) TRUNCATE t; INSERT INTO t VALUES (NULL); DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT a, array_agg(i::double precision) AS v FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) LOOP UPDATE t SET d = tdigest_add(d, r.v, 100, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT mod(i,5) AS a, i::double precision AS d FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), i) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.d, 100)::text FROM x); -- now try the same thing with bulk incremental update (using t-digests) TRUNCATE t; INSERT INTO t VALUES (NULL); DO LANGUAGE plpgsql $$ DECLARE r RECORD; BEGIN FOR r IN (SELECT a, tdigest(i,100) AS d FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) LOOP UPDATE t SET d = tdigest_union(d, r.d, false); END LOOP; END$$; -- compare the results, but do force a compaction of the incremental result WITH x AS (SELECT a, tdigest(i,100) AS d FROM (SELECT mod(i,5) AS a, i FROM generate_series(1,1000) s(i) ORDER BY mod(i,5), md5(i::text)) foo GROUP BY a ORDER BY a) SELECT (SELECT tdigest(d)::text FROM t) = (SELECT tdigest(x.d)::text FROM x); tdigest-1.4.1/test/sql/parallel_query.sql000066400000000000000000000123621450426374500205160ustar00rootroot00000000000000DO $$ DECLARE v_version numeric; BEGIN SELECT substring(setting from '\d+')::numeric INTO v_version FROM pg_settings WHERE name = 'server_version'; -- GUCs common for all versions PERFORM set_config('extra_float_digits', '0', false); PERFORM set_config('parallel_setup_cost', '0', false); PERFORM set_config('parallel_tuple_cost', '0', false); PERFORM set_config('max_parallel_workers_per_gather', '2', false); -- 9.6 used somewhat different GUC name for relation size IF v_version < 10 THEN PERFORM set_config('min_parallel_relation_size', '1kB', false); ELSE PERFORM set_config('min_parallel_table_scan_size', '1kB', false); END IF; -- in 14 disable Memoize nodes, to make explain more consistent IF v_version >= 14 THEN PERFORM set_config('enable_memoize', 'off', false); END IF; END; $$ LANGUAGE plpgsql; -- test parallel query DROP TABLE t; CREATE TABLE t (v double precision, c int, d int); INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i); ANALYZE t; CREATE TABLE t2 (d tdigest); INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d; ANALYZE t2; -- individual values EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM ( SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo; EXPLAIN (COSTS OFF) SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; SELECT 950, abs(a - b) < 0.01 FROM ( SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo; EXPLAIN (COSTS OFF) WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t) SELECT 0.95, abs(a - b) / 
tdigest-1.4.1/test/sql/parallel_query.sql
DO $$
DECLARE
    v_version numeric;
BEGIN

    SELECT substring(setting from '\d+')::numeric INTO v_version
      FROM pg_settings WHERE name = 'server_version';

    -- GUCs common for all versions
    PERFORM set_config('extra_float_digits', '0', false);
    PERFORM set_config('parallel_setup_cost', '0', false);
    PERFORM set_config('parallel_tuple_cost', '0', false);
    PERFORM set_config('max_parallel_workers_per_gather', '2', false);

    -- 9.6 used somewhat different GUC name for relation size
    IF v_version < 10 THEN
        PERFORM set_config('min_parallel_relation_size', '1kB', false);
    ELSE
        PERFORM set_config('min_parallel_table_scan_size', '1kB', false);
    END IF;

    -- in 14 disable Memoize nodes, to make explain more consistent
    IF v_version >= 14 THEN
        PERFORM set_config('enable_memoize', 'off', false);
    END IF;

END;
$$ LANGUAGE plpgsql;

-- test parallel query
DROP TABLE t;
CREATE TABLE t (v double precision, c int, d int);
INSERT INTO t SELECT 1000 * random(), 1 + mod(i,7), mod(i,113) FROM generate_series(1,100000) s(i);
ANALYZE t;

CREATE TABLE t2 (d tdigest);
INSERT INTO t2 SELECT tdigest(v, 100) FROM t GROUP BY d;
ANALYZE t2;

-- individual values
EXPLAIN (COSTS OFF)
WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo;

WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo;

EXPLAIN (COSTS OFF)
SELECT 950, abs(a - b) < 0.01 FROM (
    SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo;

SELECT 950, abs(a - b) < 0.01 FROM (
    SELECT 0.95 AS a, tdigest_percentile_of(v, 100, 950) AS b FROM t) foo;

EXPLAIN (COSTS OFF)
WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo;

WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(d, 0.95) AS b FROM t2) foo;

EXPLAIN (COSTS OFF)
SELECT 950, abs(a - b) < 0.01 FROM (
    SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo;

SELECT 950, abs(a - b) < 0.01 FROM (
    SELECT 0.95 AS a, tdigest_percentile_of(d, 950) AS b FROM t2) foo;

-- array of percentiles / values
EXPLAIN (COSTS OFF)
WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT p, abs(a - b) / 1000 < 0.01 FROM (
    SELECT
        unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b
    FROM t) foo;

WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT p, abs(a - b) / 1000 < 0.01 FROM (
    SELECT
        unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile(v, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b
    FROM t) foo;

EXPLAIN (COSTS OFF)
WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f)
SELECT p, abs(a - b) < 0.01 FROM (
    SELECT
        unnest(ARRAY[950, 990]) AS p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b
    FROM t) foo;

WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f)
SELECT p, abs(a - b) < 0.01 FROM (
    SELECT
        unnest(ARRAY[950, 990]) AS p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile_of(v, 100, ARRAY[950, 990])) AS b
    FROM t) foo;

EXPLAIN (COSTS OFF)
WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT p, abs(a - b) / 1000 < 0.01 FROM (
    SELECT
        unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b
    FROM t2) foo;

WITH x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT p, abs(a - b) / 1000 < 0.01 FROM (
    SELECT
        unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile(d, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b
    FROM t2) foo;

EXPLAIN (COSTS OFF)
WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f)
SELECT p, abs(a - b) < 0.01 FROM (
    SELECT
        unnest(ARRAY[950, 990]) AS p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b
    FROM t2) foo;

WITH x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) FROM t)) AS p FROM unnest(ARRAY[950, 990]) f)
SELECT p, abs(a - b) < 0.01 FROM (
    SELECT
        unnest(ARRAY[950, 990]) AS p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile_of(d, ARRAY[950, 990])) AS b
    FROM t2) foo;
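-- extra sketch (not part of the original test file): the estimate should stay
-- within the same tolerance with parallelism disabled, even though the digest
-- itself may differ slightly (t-digests are sensitive to input order)
SELECT set_config('max_parallel_workers_per_gather', '0', false);
WITH x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM t)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, 100, 0.95) AS b FROM t) foo;
SELECT set_config('max_parallel_workers_per_gather', '2', false);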
tdigest-1.4.1/test/sql/trimmed_aggregates.sql
DO $$
DECLARE
    v_version numeric;
BEGIN

    SELECT substring(setting from '\d+')::numeric INTO v_version
      FROM pg_settings WHERE name = 'server_version';

    -- GUCs common for all versions
    PERFORM set_config('extra_float_digits', '0', false);
    PERFORM set_config('parallel_setup_cost', '0', false);
    PERFORM set_config('parallel_tuple_cost', '0', false);
    PERFORM set_config('max_parallel_workers_per_gather', '2', false);

    -- 9.6 used somewhat different GUC name for relation size
    IF v_version < 10 THEN
        PERFORM set_config('min_parallel_relation_size', '1kB', false);
    ELSE
        PERFORM set_config('min_parallel_table_scan_size', '1kB', false);
    END IF;

    -- in 14 disable Memoize nodes, to make explain more consistent
    IF v_version >= 14 THEN
        PERFORM set_config('enable_memoize', 'off', false);
    END IF;

END;
$$ LANGUAGE plpgsql;

-- check trimmed mean (from raw data)
-- we compare the result to a range, to deal with the randomness
WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x)
SELECT
    tdigest_avg(data.r, 50, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90,
    tdigest_avg(data.r, 50, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75,
    tdigest_avg(data.r, 50, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50,
    tdigest_avg(data.r, 50, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100
FROM data;

WITH data AS (SELECT random() AS r, (1 + (3 * random())::int) AS c FROM generate_series(1,10000) AS x)
SELECT
    tdigest_avg(data.r, data.c, 100, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90,
    tdigest_avg(data.r, data.c, 100, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75,
    tdigest_avg(data.r, data.c, 100, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50,
    tdigest_avg(data.r, data.c, 100, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100
FROM data;

-- check trimmed mean (from precalculated tdigest)
-- we compare the result to a range, to deal with the randomness
WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x)
SELECT
    tdigest_avg(data.d, 0.1, 0.9) between 0.45 and 0.55 AS mean_10_90,
    tdigest_avg(data.d, 0.25, 0.75) between 0.45 and 0.55 AS mean_25_75,
    tdigest_avg(data.d, 0.0, 0.5) between 0.2 and 0.3 AS mean_0_50,
    tdigest_avg(data.d, 0.5, 1.0) between 0.7 and 0.8 AS mean_50_100
FROM data;

-- check trimmed sum (from raw data)
-- we compare the result to a range, to deal with the randomness
WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x)
SELECT
    tdigest_sum(data.r, 50, 0.1, 0.9) between 8000 * 0.45 and 8000 * 0.55 AS sum_10_90,
    tdigest_sum(data.r, 50, 0.25, 0.75) between 5000 * 0.45 and 5000 * 0.55 AS sum_25_75,
    tdigest_sum(data.r, 50, 0.0, 0.5) between 5000 * 0.2 and 5000 * 0.3 AS sum_0_50,
    tdigest_sum(data.r, 50, 0.5, 1.0) between 5000 * 0.7 and 5000 * 0.8 AS sum_50_100
FROM data;

WITH data AS (SELECT random() AS r, (1 + (3 * random())::int) AS c FROM generate_series(1,10000) AS x)
SELECT
    tdigest_sum(data.r, data.c, 100, 0.1, 0.9) between 20000 * 0.45 and 20000 * 0.55 AS sum_10_90,
    tdigest_sum(data.r, data.c, 100, 0.25, 0.75) between 12500 * 0.45 and 12500 * 0.55 AS sum_25_75,
    tdigest_sum(data.r, data.c, 100, 0.0, 0.5) between 12500 * 0.2 and 12500 * 0.3 AS sum_0_50,
    tdigest_sum(data.r, data.c, 100, 0.5, 1.0) between 12500 * 0.7 and 12500 * 0.8 AS sum_50_100
FROM data;

-- check trimmed sum (from precalculated tdigest)
-- we compare the result to a range, to deal with the randomness
WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x)
SELECT
    tdigest_sum(data.d, 0.1, 0.9) between 8000 * 0.45 and 8000 * 0.55 AS sum_10_90,
    tdigest_sum(data.d, 0.25, 0.75) between 5000 * 0.45 and 5000 * 0.55 AS sum_25_75,
    tdigest_sum(data.d, 0.0, 0.5) between 5000 * 0.2 and 5000 * 0.3 AS sum_0_50,
    tdigest_sum(data.d, 0.5, 1.0) between 5000 * 0.7 and 5000 * 0.8 AS sum_50_100
FROM data;

WITH data AS (SELECT tdigest(random(), 50) AS d FROM generate_series(1,10000) AS x)
SELECT
    tdigest_digest_sum(data.d, 0.05, 0.95) between 9000 * 0.45 and 9000 * 0.55 AS sum_05_95,
    tdigest_digest_avg(data.d, 0.05, 0.95) between 0.45 and 0.55 AS mean_05_95
FROM data;
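-- extra sketch (not part of the original test file): with bounds (0.0, 1.0)
-- nothing gets trimmed, so the trimmed mean should approximate the plain average
WITH data AS (SELECT random() AS r FROM generate_series(1,10000) AS x)
SELECT tdigest_avg(data.r, 50, 0.0, 1.0) between 0.45 and 0.55 AS mean_0_100
FROM data;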
tdigest-1.4.1/test/sql/value_count_api.sql
DO $$
DECLARE
    v_version numeric;
BEGIN

    SELECT substring(setting from '\d+')::numeric INTO v_version
      FROM pg_settings WHERE name = 'server_version';

    -- GUCs common for all versions
    PERFORM set_config('extra_float_digits', '0', false);
    PERFORM set_config('parallel_setup_cost', '0', false);
    PERFORM set_config('parallel_tuple_cost', '0', false);
    PERFORM set_config('max_parallel_workers_per_gather', '2', false);

    -- 9.6 used somewhat different GUC name for relation size
    IF v_version < 10 THEN
        PERFORM set_config('min_parallel_relation_size', '1kB', false);
    ELSE
        PERFORM set_config('min_parallel_table_scan_size', '1kB', false);
    END IF;

    -- in 14 disable Memoize nodes, to make explain more consistent
    IF v_version >= 14 THEN
        PERFORM set_config('enable_memoize', 'off', false);
    END IF;

END;
$$ LANGUAGE plpgsql;

-- API
select tdigest_percentile(value, count, 100, 0.95)
  from (values
    (47325940488,1), (15457695432,2), (6889790700,3), (4188763788,4),
    (2882932224,5), (2114815860,6), (1615194324,7), (2342114568,9),
    (1626471924,11), (1660755408,14), (1143728292,17), (1082582424,21),
    (911488284,26), (728863908,32), (654898692,40), (530198076,50),
    (417883440,62), (341452344,77), (274579584,95), (231921120,118),
    (184091820,146), (152469828,181), (125634972,224), (107059704,278),
    (88746120,345), (73135668,428), (61035756,531), (50683320,658),
    (42331824,816), (35234400,1012), (29341356,1255), (24290928,1556),
    (20284668,1929), (17215908,2391), (14737488,2964), (12692772,3674),
    (11220732,4555), (9787584,5647), (8148420,7000), (6918612,8678),
    (6015000,10758), (5480316,13336), (5443356,16532), (4535616,20494),
    (3962316,25406), (3914484,31495), (3828108,39043), (3583536,48400),
    (4104120,60000), (166024740,2147483647)
  ) foo (count, value);
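-- extra sketch (not part of the original test file): the (value, count) variant
-- is equivalent to feeding each value 'count' times, only cheaper; a tiny case
-- aggregating the four inputs {1, 1, 1, 2} (the estimate is interpolated, so it
-- may not equal any input exactly)
SELECT tdigest_percentile(v, c, 10, 0.5)
  FROM (VALUES (1.0::double precision, 3), (2.0, 1)) AS foo (v, c);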
----------------------------------------------
-- nice data set with random data (uniform) --
----------------------------------------------

-- 10 centroids (tiny)
WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt),
     data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random())
SELECT
    p,
    abs(a - b) < 0.1, -- arbitrary threshold of 10%
    (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err
FROM (
    SELECT
        unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p,
        unnest(a) AS a,
        unnest(b) AS b
    FROM
        (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo,
        (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 10, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar
) baz;

-- 100 centroids (okay-ish)
WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt),
     data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random())
SELECT
    p,
    abs(a - b) < 0.01, -- arbitrary threshold of 1%
    (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err
FROM (
    SELECT
        unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p,
        unnest(a) AS a,
        unnest(b) AS b
    FROM
        (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo,
        (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 100, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar
) baz;

-- 1000 centroids (very accurate)
WITH data AS (SELECT prng(1000) x, prng(1000, 29823218) cnt),
     data_expanded AS (SELECT x FROM (SELECT x, generate_series(1, (10 + 100 * cnt)::int) FROM data) foo ORDER BY random())
SELECT
    p,
    abs(a - b) < 0.01, -- arbitrary threshold of 1%
    (CASE WHEN abs(a - b) < 0.1 THEN NULL ELSE (a - b) END) AS err
FROM (
    SELECT
        unnest(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) AS p,
        unnest(a) AS a,
        unnest(b) AS b
    FROM
        (SELECT percentile_cont(ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) WITHIN GROUP (ORDER BY x) a FROM data_expanded) foo,
        (SELECT tdigest_percentile(x, (10 + 100 * cnt)::int, 1000, ARRAY[0.01, 0.05, 0.1, 0.9, 0.95, 0.99]) b FROM data) bar
) baz;

-- API
EXPLAIN (COSTS OFF)
WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo;

WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT percentile_disc(0.95) WITHIN GROUP (ORDER BY v) AS p FROM d)
SELECT 0.95, abs(a - b) / 1000 < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile(v, c, 100, 0.95) AS b FROM t) foo;

EXPLAIN (COSTS OFF)
WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d)
SELECT 950, abs(a - b) < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo;

WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT percent_rank(950) WITHIN GROUP (ORDER BY v) AS p FROM d)
SELECT 950, abs(a - b) < 0.01 FROM (
    SELECT (SELECT p FROM x) AS a, tdigest_percentile_of(v, c, 100, 950) AS b FROM t) foo;

-- array of percentiles / values
EXPLAIN (COSTS OFF)
WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d)
SELECT p, abs(a - b) / 1000 < 0.01 FROM (
    SELECT
        unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b
    FROM t) foo;

WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT percentile_disc(ARRAY[0.0, 0.95, 0.99, 1.0]) WITHIN GROUP (ORDER BY v) AS p FROM d)
SELECT p, abs(a - b) / 1000 < 0.01 FROM (
    SELECT
        unnest(ARRAY[0.0, 0.95, 0.99, 1.0]) p,
        unnest((SELECT p FROM x)) AS a,
        unnest(tdigest_percentile(v, c, 100, ARRAY[0.0, 0.95, 0.99, 1.0])) AS b
    FROM t) foo;

EXPLAIN (COSTS OFF)
WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f)
SELECT p, abs(a - b) < 0.01 FROM (
    SELECT
        unnest(ARRAY[950, 990]) AS p,
        unnest((select x.p from x)) AS a,
        unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b
    FROM t) foo;

WITH d AS (SELECT t.* FROM t, LATERAL generate_series(1,t.c)),
     x AS (SELECT array_agg((SELECT percent_rank(f) WITHIN GROUP (ORDER BY v) AS p FROM d)) p FROM unnest(ARRAY[950, 990]) f)
SELECT p, abs(a - b) < 0.01 FROM (
    SELECT
        unnest(ARRAY[950, 990]) AS p,
        unnest((select x.p from x)) AS a,
        unnest(tdigest_percentile_of(v, c, 100, ARRAY[950, 990])) AS b
    FROM t) foo;
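-- extra sketch (not part of the original test file): the scalar and array
-- variants should agree on the same input, since both aggregates see the rows
-- in the same order and build the same digest
SELECT abs(tdigest_percentile_of(v, c, 100, 950)
         - (tdigest_percentile_of(v, c, 100, ARRAY[950]))[1]) < 0.000001
  FROM t;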