| Port details |
- pg_incremental: Incremental Data Processing in PostgreSQL
- Version: 1.3.0 (databases)
- Version of this port on the latest quarterly branch: 1.0.1
- Maintainer: tz@FreeBSD.org
- Port Added: 2024-12-23 21:39:04
- Last Update: 2025-11-12 22:00:39
- Commit Hash: e10c730
- License: PostgreSQL
- WWW: https://github.com/CrunchyData/pg_incremental
- Description:
- pg_incremental is a simple extension that helps you do fast, reliable,
incremental batch processing in PostgreSQL.
With pg_incremental, you define a pipeline with a parameterized query. The
pipeline is executed for all existing data when created, and then periodically
executed. If there is new data, the query is executed with parameter values that
correspond to the new data. Depending on the type of pipeline, the parameters
could reflect a new range of sequence values, a new time range, or a new file.
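
As a sketch of what such a pipeline might look like (the `incremental.create_sequence_pipeline` function name follows the upstream README; the tables and the aggregation query are hypothetical, for illustration only):

```sql
-- Hypothetical example: incrementally roll up new rows from an events table.
CREATE EXTENSION IF NOT EXISTS pg_incremental CASCADE;

CREATE TABLE events (
    event_id   bigint GENERATED ALWAYS AS IDENTITY,
    event_time timestamptz NOT NULL DEFAULT now(),
    payload    jsonb
);

CREATE TABLE events_daily (
    day   date   NOT NULL,
    count bigint NOT NULL
);

-- The pipeline runs once over all existing data when created, then
-- periodically; $1 and $2 are filled in with the new range of
-- sequence values on each execution.
SELECT incremental.create_sequence_pipeline('event-rollup', 'events', $$
    INSERT INTO events_daily
    SELECT event_time::date, count(*)
    FROM events
    WHERE event_id BETWEEN $1 AND $2
    GROUP BY 1
$$);
```

Time-interval and file-list pipelines follow the same pattern, with the parameters reflecting a time range or a file name instead of a sequence range.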
- Manual pages:
- FreshPorts has no man page information for this port.
- pkg-plist: as obtained via: make generate-plist
- USE_RC_SUBR (Service Scripts): no SUBR information found for this port
- Dependency lines:
- pg_incremental>0:databases/pg_incremental
- To install the port:
- cd /usr/ports/databases/pg_incremental/ && make install clean
- To add the package, run one of these commands:
- pkg install databases/pg_incremental
- pkg install pg_incremental
- NOTE: If this package has multiple flavors (see below), then use one of them instead of the name specified above.
- PKGNAME: pg_incremental
- Flavors: there is no flavor information for this port.
- distinfo:
- TIMESTAMP = 1762943675
SHA256 (CrunchyData-pg_incremental-v1.3.0_GH0.tar.gz) = 8e9c6bcc9975d3e5425080a9b9152aecf6028c4d8b5e4bb66c39020b91dcaffc
SIZE (CrunchyData-pg_incremental-v1.3.0_GH0.tar.gz) = 20925
- Dependencies
- NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
- Build dependencies:
- gmake>=4.4.1 : devel/gmake
- postgres : databases/postgresql17-server
- Runtime dependencies:
- pg_cron>=1.6.4 : databases/pg_cron
- postgres : databases/postgresql17-server
- There are no ports dependent upon this port
Configuration Options:
- No options to configure
- Options name:
- databases_pg_incremental
- USES:
- gmake pgsql:10+
- FreshPorts was unable to extract/find any pkg message
- Master Sites:
| Commit History - (may be incomplete: for full details, see links to repositories near top of page) |
| Commit | Credits | Log message |
1.3.0 | 12 Nov 2025 22:00:39 | Torsten Zuehlsdorff (tz) |
databases/pg_incremental: Upgrade from 1.0.1 to 1.3.0
Compiled Changelog:
* Fixes a bug that prevented INSERT..SELECT pipelines
* Fixes a bug that caused file list pipelines to repeat first file
* Add batched mode for file list pipeline
* Add incremental.default_file_list_function setting
* Fixes a bug that prevented file list pipelines from being refreshed
* Fixes bug that could cause batched file list pipelines to crash
* Add a max_batch_size argument to file list pipelines
* Improve performance of batched file list pipelines
* Adjust the default schedule of file list pipelines to every 15 minutes
* Adds an incremental.skip_file function to use for erroneous files in file
pipelines
* Removes the hard dependency on pg_cron at CREATE EXTENSION time
Taken from:
* https://github.com/CrunchyData/pg_incremental/releases/tag/v1.1.0
* https://github.com/CrunchyData/pg_incremental/releases/tag/v1.1.1
* https://github.com/CrunchyData/pg_incremental/releases/tag/v1.2.0
* https://github.com/CrunchyData/pg_incremental/releases/tag/v1.3.0
Sponsored by: OTTRIA GmbH |
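
The file-list features added in this release might be used roughly as follows. This is a hedged sketch only: `incremental.create_file_list_pipeline`, the `max_batch_size` argument, and `incremental.skip_file` are taken from the changelog above, while the exact signatures, the file pattern, and the target table are assumptions:

```sql
-- Hypothetical sketch: process newly arriving files in batches,
-- with at most 100 files per run ($1 is the file to process).
SELECT incremental.create_file_list_pipeline('file-import',
    's3://my-bucket/events/*.csv',
    $$ COPY events_raw FROM $1 $$,
    max_batch_size := 100);

-- Mark a broken file as handled so the pipeline can move past it.
SELECT incremental.skip_file('file-import', 's3://my-bucket/events/bad.csv');
```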
1.0.1 | 23 Dec 2024 21:38:36 | Torsten Zuehlsdorff (tz) |
databases/pg_incremental: New Port
pg_incremental is a simple extension that helps you do fast, reliable,
incremental batch processing in PostgreSQL.
With pg_incremental, you define a pipeline with a parameterized query. The
pipeline is executed for all existing data when created, and then periodically
executed. If there is new data, the query is executed with parameter values that
correspond to the new data. Depending on the type of pipeline, the parameters
could reflect a new range of sequence values, a new time range, or a new file.
Sponsored by: P. Variablis GmbH |