
Database Tests

The database tests are sometimes specific to one database system and sometimes general, but they all use the same schema for the connection.

Data

connection:
#   host: localhost         # host of database
#   port: 5432              # port to connect to
#   user: alex              # if needed
#   password: IdontKnow     # if needed
    database: my_db         # database name

This can be defined in any test or fix of the Database module. There are data-based and system-based checks.
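For example, a fully specified connection to a remote server could look like the following sketch; the host name and credentials are placeholders only:

connection:
    host: db.example.com        # host of database
    port: 5432                  # port to connect to
    user: monitor               # if needed
    password: secret            # if needed
    database: my_db             # database name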

Warning

Some of the database tests, like the PostgreSQL analysis, monitor for a short period and therefore take some time. If you run multiple of these tests, keep in mind to increase the concurrency so that other tests are not blocked too much.

Info

Native postgres support can be enabled by installing libpq-dev on the server and adding the pg-native npm module:

sudo apt install libpq-dev
npm install pg-native

sql/record

This will check a specific result from a SQL query.

Data

data:
    client: pg                      # Possible clients are: pg, mysql, mysql2, mssql, oracledb, sqlite3
    version: 5.9                    # optional for pg, mysql, mysql2, oracledb
    connection: ...                 # how to reach the database (see above)
    searchPath: [alinex]            # only allowed in pg
    sql: select count(*) as num from my_records
    warn:
        time: 10 s                  # maximum time of query
        num:                        # any field from sql
            # for numbers
            min: 5
            max: 10
    #       # for date/time values
    #       before: 2022-01-01
    #       after: 2021-01-01
    #       # for string values
    #       allow: /OK/
    #       disallow: /ERROR/

Hint

Only select statements are possible here. Best practice is to use a defined name for each returned column.
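As a sketch, a complete sql/record check built from the schema above might look like this; the table and column names are only examples:

data:
    client: pg                          # use the node-postgres client
    connection:
        database: my_db                 # connection as described above
    sql: select count(*) as num from my_records where status = 'ERROR'
    warn:
        time: 10 s                      # warn if the query takes longer
        num:
            max: 0                      # warn as soon as any error record exists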

Result values are:

  • time - time in ms to run the query
  • <field> - each field returned by the SQL query

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK SQL Query (database.sqlrecord)
    This test checks contents of the database pg@localhost/postgres.
    REQUEST  select 5 as num
    RETURN   { num: 5 }
    RESULT   { num: 5, time: 15 }
    VALIDATE Check values for errors
                A map like data object.
    VALIDATE Check values for warnings
                A map like data object. The items have the following format:
                -   num: NumberSchema
                    A numeric value. The value has to be equal or between 1 and 8.
                -   time: The time to query the data should be not more than 100 ms.
                    A numeric value. The value has to be less or equal 100.

postgres/cluster

A PostgreSQL server will be checked. This includes lots of health parameters.

Data

data:
    connection: ...                 # how to reach the database (see above)
    measureTime: 30s                # time to measure changes
    warn:
        time: 100                   # maximum connection time (ms)
        uptime: 1 h                 # minimum time since last boot
        conn: 100                   # maximum number of connections
        conn_free: 100              # minimum number of free connections
        transaction_age: 3 h        # maximum time for transactions
        query_age: 1 h              # maximum time for queries

Attention

This has to be run as a superuser with all rights to be able to analyze the whole cluster.
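A minimal configuration sketch using only the keys shown above; the thresholds are illustrative, not recommendations:

data:
    connection:
        host: db.example.com        # placeholder host
        user: postgres              # superuser (see attention note)
        password: secret            # placeholder
        database: postgres
    measureTime: 30s
    warn:
        time: 100                   # warn above 100 ms connection time
        uptime: 1 h                 # warn if restarted within the last hour
        conn: 80                    # warn above 80 connections
        conn_free: 10               # warn below 10 free connections
        transaction_age: 3 h
        query_age: 1 h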

Result values are:

  • time - time in ms to open connection and get a ping response
  • connect - flag if connection was possible
  • version - postgres version like 130002
  • uptime - time in seconds since last database start
  • cluster_name - name of the cluster
  • work_mem - the maximum memory to be used for query workspaces
  • work_mem_maintenance - the maximum memory to be used for maintenance operations
  • shared_buffers - the number of shared memory buffers used by the server
  • cache_size - the planner's assumption about the total size of the data caches
  • conn_max - maximum number of connections
  • conn_super - reserved superuser connections
  • replica - true if this is a replica (not master)
  • replica_lag - time in seconds since this server received its last update
  • user - number of connected users
  • user_names - list of connected users
  • user_ip - list of ip addresses connected
  • user_app - list of connected applications
  • transaction_age - age of oldest active transaction in seconds
  • query_age - age of oldest running query in seconds
  • conn - number of current connections
  • conn_free - number of free connections to be used
  • conn_active - number of active connections
  • conn_idle - number of idle connections
  • conn_wait - number of waiting connections
  • extensions - list of installed extensions
  • tablespaces - number of user-defined tablespaces
  • tablespaces_size - size in bytes of all user-defined tablespaces together
  • commit_ratio - ratio of commits to all transactions (should be > 95%)

And the following values are growing statistics; each has an additional *_diff entry with the difference per second:

  • commits - number of commits
  • rollbacks - number of rollbacks
  • block_read - number of disk blocks read
  • block_cache - number of disk blocks read from buffer cache
  • block_ratio - cache hit ratio (should be > 90%)
  • disk_temp - number of temporary files created
  • disk_read - time spent reading data in milliseconds
  • disk_write - time spent writing data in milliseconds
  • returned - number of records returned
  • fetched - number of records fetched
  • inserted - number of records inserted
  • updated - number of records updated
  • deleted - number of records deleted
  • conflicts - number of conflicts
  • deadlocks - number of deadlocks
  • stats_age - time to calculate diffs (not stored)
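As a worked example taken from the console output below: two samples are read measureTime apart, and the per-second values in the RESULT (e.g. returned) appear to be the difference between the samples divided by the elapsed seconds derived from stats_age:

    returned diff = (36425 - 36417) / (691591.56 - 691581.54)
                  = 8 / 10.02
                  ≈ 0.80 per second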

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK Postgres Cluster (database.postgres)
    This test checks the PostgreSQL cluster at pg://localhost:5432/alex.
    REQUEST  SHOW server_version_num
    RETURN   { server_version_num: '130002' }
    REQUEST  select * from (
              select
                extract(epoch from current_timestamp - pg_postmaster_start_time()) as uptime,
              current_setting('cluster_name') as cluster_name,
              current_setting('work_mem') as work_mem,
              current_setting('maintenance_work_mem') as work_mem_maintenance,
              current_setting('shared_buffers') as shared_buffers,
              current_setting('effective_cache_size') as effective_cache_size,
              current_setting('max_connections') as conn_max,
              current_setting('superuser_reserved_connections') as conn_super,
                pg_is_in_recovery() as replica,
                case when pg_is_in_recovery() then extract(epoch from now()-pg_last_xact_replay_timestamp()) end as replica_lag
            ) as core, (
              select
                (select count(*)::integer from pg_catalog.pg_user) as user,
                string_agg(distinct usename, ', ') as user_names,
                string_agg(distinct case when application_name = '' then null else application_name end, ', ') as app_names,
                extract(epoch from current_timestamp - min(xact_start)) as transaction_age,
                extract(epoch from current_timestamp - min(case when state = 'active' then query_start else null end)) as query_age,
                count(*)::integer as conn,
              count(case when state = 'active' then 1 end)::integer as conn_active,
              count(case when state != 'active' then 1 end)::integer as conn_idle,
              count(case when wait_event is not null and wait_event not in('ClientRead') then 1 end)::integer as wait
              from pg_catalog.pg_stat_activity
            ) as activity, (
              select string_agg(extname, ', ') as extensions from pg_extension
            ) as extension, (
              select
                xact_commit as commits,
                xact_rollback as rollbacks,
                blks_read as disk_block,
                blks_hit as disk_cache,
                temp_files as disk_temp,
                blk_read_time as disk_read,
                blk_write_time as disk_write,
                tup_returned as returned,
                tup_fetched as fetched,
                tup_inserted as inserted,
                tup_updated as updated,
                tup_deleted as deleted,
                conflicts,
                deadlocks,
                extract(epoch from now() - stats_reset) as stats_age -- in seconds
            from pg_stat_database
            ) as dynamic;
    RETURN   {
              uptime: 103585.138003,
              cluster_name: '',
              work_mem: '4MB',
              work_mem_maintenance: '64MB',
              shared_buffers: '128MB',
              effective_cache_size: '4GB',
              conn_max: '100',
              conn_super: '3',
              replica: false,
              replica_lag: null,
              user: 2,
              user_names: 'alex, postgres',
              app_names: 'PostgreSQL JDBC Driver',
              transaction_age: 0,
              query_age: 0,
              conn: 11,
              conn_active: 1,
              conn_idle: 5,
              wait: 0,
              extensions: 'plpgsql',
              commits: '0',
              rollbacks: '0',
              disk_block: '106',
              disk_cache: '100845',
              disk_temp: '0',
              disk_read: 0,
              disk_write: 0,
              returned: '36417',
              fetched: '22433',
              inserted: '0',
              updated: '0',
              deleted: '0',
              conflicts: '0',
              deadlocks: '0',
              stats_age: 691581.541306
            }
    REQUEST  select
                xact_commit as commits,
                xact_rollback as rollbacks,
                blks_read as disk_block,
                blks_hit as disk_cache,
                temp_files as disk_temp,
                blk_read_time as disk_read,
                blk_write_time as disk_write,
                tup_returned as returned,
                tup_fetched as fetched,
                tup_inserted as inserted,
                tup_updated as updated,
                tup_deleted as deleted,
                conflicts,
                deadlocks,
                extract(epoch from now() - stats_reset) as stats_age -- in seconds
            from pg_stat_database
    RETURN   {
              commits: '0',
              rollbacks: '0',
              disk_block: '106',
              disk_cache: '100864',
              disk_temp: '0',
              disk_read: 0,
              disk_write: 0,
              returned: '36425',
              fetched: '22437',
              inserted: '0',
              updated: '0',
              deleted: '0',
              conflicts: '0',
              deadlocks: '0',
              stats_age: 691591.562024
            }
    RESULT   {
              connect: true,
              version: 130002,
              time: 21,
              uptime: 103585.138003,
              cluster_name: '',
              work_mem: 4194304,
              work_mem_maintenance: 67108864,
              shared_buffers: 134217728,
              effective_cache_size: 4294967296,
              conn_max: 100,
              conn_super: 3,
              replica: false,
              replica_lag: null,
              user: 2,
              user_names: [ 'alex', 'postgres' ],
              app_names: [ 'PostgreSQL JDBC Driver' ],
              transaction_age: 0,
              query_age: 0,
              conn: 11,
              conn_active: 1,
              conn_idle: 5,
              wait: 0,
              extensions: [ 'plpgsql' ],
              commits: 0,
              rollbacks: 0,
              disk_block: 0,
              disk_cache: 1.8960717186106686,
              disk_temp: 0,
              disk_read: 0,
              disk_write: 0,
              returned: 0.7983459867834395,
              fetched: 0.39917299339171974,
              inserted: 0,
              updated: 0,
              deleted: 0,
              conflicts: 0,
              deadlocks: 0,
              conn_free: 89
            }

postgres/database

A PostgreSQL database will be checked. This includes lots of health parameters.

Data

data:
    connection: ...                 # how to reach the database (see above)
    measureTime: 30s                # time to measure changes
    warn:
        transaction_age: 3 h        # maximum time for transactions
        query_age: 1 h              # maximum time for queries

Attention

This has to be run as a superuser with all rights to be able to analyze the whole database.
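A minimal configuration sketch; the host, credentials, and warn thresholds are placeholders, not recommendations:

data:
    connection:
        host: db.example.com        # placeholder host
        user: postgres              # superuser (see attention note)
        password: secret            # placeholder
        database: my_db
    measureTime: 30s
    warn:
        transaction_age: 2 h        # warn on long running transactions
        query_age: 1 h              # warn on long running queries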

Result values are:

  • conn_max - maximum number of connections (if defined)
  • size - size of all databases together in bytes
  • size_table - size of all tables together in bytes
  • size_data - size of all data elements in tables in bytes
  • size_free - size of all free data within tables in bytes
  • size_index - size of all indices in bytes
  • tables - number of tables
  • tables_unused - number of unused tables
  • rows - estimated number of data rows
  • dead - estimated number of dead rows
  • indexes - number of indexes
  • indexes_unused - number of unused indexes
  • views - number of views
  • matviews - number of materialized views
  • functions - number of user functions
  • sequences - number of sequences
  • schemas - number of sub schemas
  • user - number of connected users
  • user_names - list of connected users
  • user_ip - list of ip addresses connected
  • user_app - list of connected applications
  • transaction_age - age of oldest active transaction in seconds
  • query_age - age of oldest running query in seconds
  • conn - number of current connections
  • conn_free - number of free connections to be used (if limit defined)
  • conn_active - number of active connections
  • conn_idle - number of idle connections
  • conn_wait - number of waiting connections

And the following values are growing statistics; each has an additional *_diff entry with the difference per second:

  • commits - number of commits
  • rollbacks - number of rollbacks
  • commit_ratio - ratio of commits to all transactions (should be > 95%)
  • block_read - number of disk blocks read
  • block_cache - number of disk blocks read from buffer cache
  • block_ratio - cache hit ratio (should be > 90%)
  • disk_temp - number of temporary files created
  • disk_read - time spent reading data in milliseconds
  • disk_write - time spent writing data in milliseconds
  • returned - number of records returned
  • fetched - number of records fetched
  • inserted - number of records inserted
  • updated - number of records updated
  • deleted - number of records deleted
  • conflicts - number of conflicts
  • deadlocks - number of deadlocks
  • stats_age - time to calculate diffs (not stored)

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK Postgres Database (database.postgres.database)
    This test checks the PostgreSQL database at pg://localhost:5432/postgres.
    REQUEST  SHOW server_version_num
    RETURN   { server_version_num: '130002' }
    REQUEST  select * from (
              select
                pg_database_size(current_database()) as size,
                sum(pg_total_relation_size(relid)-pg_indexes_size(relid)) as size_table,
                sum(pg_relation_size(relid, 'main')) as size_data,
                sum(pg_relation_size(relid, 'fsm')) as size_free,
                sum(pg_indexes_size(relid)) as size_index,
              count(*)::integer as tables,
                sum(n_live_tup) as rows,
                sum(n_dead_tup) as dead
              from pg_stat_user_tables
            ) as size, (
              select count(*)::integer as indexes
              from pg_stat_user_indexes
            ) as indexes, (
              select count(*)::integer as views
              from pg_views where schemaname NOT IN ('pg_catalog', 'information_schema')
            ) as views, (
              select count(*)::integer as matviews
                from pg_matviews where schemaname NOT IN ('pg_catalog', 'information_schema')
            ) as matviews, (
                select count(*)::integer as functions
                from pg_catalog.pg_proc p
                  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
                WHERE pg_catalog.pg_function_is_visible(p.oid)
                  AND n.nspname <> 'pg_catalog'
                  AND n.nspname <> 'information_schema'
                  and p.probin is null
            ) as functions, (
                select count(*)::integer as sequences
                from pg_sequences where schemaname NOT IN ('pg_catalog', 'information_schema')
            ) as sequences, (
              select count(*)::integer as schemas
              from pg_catalog.pg_namespace where nspname not like 'pg_%' and nspname != 'information_schema'
            ) as schemas, (
              select
                (select count(*)::integer from pg_catalog.pg_user) as user,
                string_agg(distinct usename, ', ') as user_names,
                string_agg(distinct case when application_name = '' then null else application_name end, ', ') as app_names,
                extract(epoch from current_timestamp - min(xact_start)) as transaction_age,
                extract(epoch from current_timestamp - min(case when state = 'active' then query_start else null end)) as query_age,
                count(*)::integer as conn,
              count(case when state = 'active' then 1 end)::integer as conn_active,
              count(case when state != 'active' then 1 end)::integer as conn_idle,
              count(case when wait_event is not null and wait_event not in('ClientRead') then 1 end)::integer as wait
              from pg_catalog.pg_stat_activity
            ) as activity, (
              select string_agg(extname, ', ') as extensions from pg_extension
            ) as extension, (
              select
                xact_commit as commits,
                xact_rollback as rollbacks,
                blks_read as disk_block,
                blks_hit as disk_cache,
                temp_files as disk_temp,
                blk_read_time as disk_read,
                blk_write_time as disk_write,
                tup_returned as returned,
                tup_fetched as fetched,
                tup_inserted as inserted,
                tup_updated as updated,
                tup_deleted as deleted,
                conflicts,
                deadlocks,
                extract(epoch from now() - stats_reset) as stats_age -- in seconds
            from pg_stat_database where datname = current_database()
            ) as dynamic;
    RETURN   {
              size: '9556527',
              size_table: '393216',
              size_data: '65536',
              size_free: '196608',
              size_index: '196608',
              tables: 8,
              rows: '144',
              dead: '149',
              indexes: 12,
              views: 0,
              matviews: 0,
              functions: 0,
              sequences: 0,
              schemas: 1,
              user: 2,
              user_names: 'alex, postgres',
              app_names: null,
              transaction_age: 0,
              query_age: 0,
              conn: 6,
              conn_active: 1,
              conn_idle: 0,
              wait: 0,
              extensions: 'plpgsql',
              commits: '14489',
              rollbacks: '59',
              disk_block: '3323',
              disk_cache: '1044084',
              disk_temp: '0',
              disk_read: 0,
              disk_write: 0,
              returned: '5307742',
              fetched: '590670',
              inserted: '235',
              updated: '1389',
              deleted: '51',
              conflicts: '0',
              deadlocks: '0',
              stats_age: 821187.091487
            }
    REQUEST  select
                xact_commit as commits,
                xact_rollback as rollbacks,
                blks_read as disk_block,
                blks_hit as disk_cache,
                temp_files as disk_temp,
                blk_read_time as disk_read,
                blk_write_time as disk_write,
                tup_returned as returned,
                tup_fetched as fetched,
                tup_inserted as inserted,
                tup_updated as updated,
                tup_deleted as deleted,
                conflicts,
                deadlocks,
                extract(epoch from now() - stats_reset) as stats_age -- in seconds
            from pg_stat_database where datname = current_database()
    RETURN   {
              commits: '14489',
              rollbacks: '59',
              disk_block: '3323',
              disk_cache: '1044084',
              disk_temp: '0',
              disk_read: 0,
              disk_write: 0,
              returned: '5307742',
              fetched: '590670',
              inserted: '235',
              updated: '1389',
              deleted: '51',
              conflicts: '0',
              deadlocks: '0',
              stats_age: 821197.190714
            }
    RESULT   {
              connect: true,
              version: 130002,
              time: 25,
              size: 9556527,
              size_table: 393216,
              size_data: 65536,
              size_free: 196608,
              size_index: 196608,
              tables: 8,
              rows: 144,
              dead: 149,
              indexes: 12,
              views: 0,
              matviews: 0,
              functions: 0,
              sequences: 0,
              schemas: 1,
              user: 2,
              user_names: [ 'alex', 'postgres' ],
              app_names: [],
              transaction_age: 0,
              query_age: 0,
              conn: 6,
              conn_active: 1,
              conn_idle: 0,
              wait: 0,
              extensions: [ 'plpgsql' ],
              commits: 0,
              rollbacks: 0,
              disk_block: 0,
              disk_cache: 0,
              disk_temp: 0,
              disk_read: 0,
              disk_write: 0,
              returned: 0,
              fetched: 0,
              inserted: 0,
              updated: 0,
              deleted: 0,
              conflicts: 0,
              deadlocks: 0,
              conn_free: NaN
            }

postgres/schema

A PostgreSQL schema will be checked. This includes lots of health parameters.

Data

data:
    connection: ...                 # how to reach the database (see above)
    schema: my_schema               # what to analyze
    measureTime: 30s                # time to measure changes

Attention

This has to be run as a superuser with all rights to be able to analyze the whole schema.
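The console output below could stem from a configuration roughly like this sketch (the password is a placeholder):

data:
    connection:
        host: office.database.office.dvb
        port: 5432
        user: operator
        password: secret            # placeholder
        database: dvb_manage
    schema: dvb_publication
    measureTime: 30s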

Result values are:

  • size - size of all schema tables together with their indexes in bytes
  • size_table - size of all tables together in bytes
  • size_data - size of all data elements in tables in bytes
  • size_free - size of all free data within tables in bytes
  • size_index - size of all indices in bytes
  • tables - number of tables
  • tables_unused - number of unused tables
  • rows - estimated number of data rows
  • dead - estimated number of dead rows
  • indexes - number of indexes
  • indexes_unused - number of unused indexes
  • views - number of views
  • matviews - number of materialized views
  • functions - number of user functions
  • sequences - number of sequences

And the following values are growing statistics; each has an additional *_diff entry with the difference per second:

  • inserted - number of records inserted
  • updated - number of records updated
  • deleted - number of records deleted
  • stats_age - time to calculate diffs (not stored)

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK Postgres Schema (database.postgres.schema)
    This test checks the PostgreSQL schema dvb_publication on pg://operator@office.database.office.dvb:5432/dvb_manage.
    REQUEST  select * from (
                select
              sum(pg_total_relation_size(relid)) as size,
                sum(pg_total_relation_size(relid)-pg_indexes_size(relid)) as size_table,
                sum(pg_relation_size(relid, 'main')) as size_data,
                sum(pg_relation_size(relid, 'fsm')) as size_free,
                sum(pg_indexes_size(relid)) as size_index,
              count(*)::integer as tables,
              count(case when idx_tup_fetch + seq_tup_read = 0 then 1 end)::integer as tables_unused,
                sum(n_live_tup) as rows,
                sum(n_dead_tup) as dead
              from pg_stat_user_tables where schemaname = 'dvb_publication'
            ) as size, (
              select count(*)::integer as indexes,
              count(case when idx_scan = 0 then 1 end)::integer as indexes_unused
              from pg_stat_user_indexes where schemaname = 'dvb_publication'
            ) as indexes, (
              select count(*)::integer as views
              from pg_views where schemaname = 'dvb_publication'
            ) as views, (
              select count(*)::integer as matviews
                from pg_matviews where schemaname = 'dvb_publication'
            ) as matviews, (
                select count(*)::integer as functions
                from pg_catalog.pg_proc p
                  LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
                WHERE pg_catalog.pg_function_is_visible(p.oid)
                  AND n.nspname = 'dvb_publication'
                  and p.probin is null
            ) as functions, (
                select count(*)::integer as sequences
                from pg_sequences where schemaname NOT IN ('pg_catalog', 'information_schema')
            ) as sequences, (
              select * from (
              select
                n_tup_ins as inserted,
                n_tup_upd as updated,
                n_tup_del as deleted
              from pg_stat_user_tables where schemaname = 'dvb_publication'
            ) as values, (
              select
              extract(epoch from now() - stats_reset) as stats_age -- in seconds
              from pg_stat_database where datname = current_database()
            ) as stat
            ) as dynamic;
    RETURN   {
              size: '74579968',
              size_table: '74547200',
              size_data: '70631424',
              size_free: '73728',
              size_index: '32768',
              tables: 7,
              tables_unused: 0,
              rows: '320381',
              dead: '0',
              indexes: 1,
              indexes_unused: 0,
              views: 33,
              matviews: 5,
              functions: 2,
              sequences: 63,
              inserted: '0',
              updated: '0',
              deleted: '0',
              stats_age: 13421348.62007
            }
    REQUEST  select * from (
              select
                n_tup_ins as inserted,
                n_tup_upd as updated,
                n_tup_del as deleted
              from pg_stat_user_tables where schemaname = 'dvb_publication'
            ) as values, (
              select
              extract(epoch from now() - stats_reset) as stats_age -- in seconds
              from pg_stat_database where datname = current_database()
            ) as stat
    RETURN   {
              inserted: '0',
              updated: '0',
              deleted: '0',
              stats_age: 13421379.312476
            }
    RESULT   {
              size: 74579968,
              size_table: 74547200,
              size_data: 70631424,
              size_free: 73728,
              size_index: 32768,
              tables: 7,
              tables_unused: 0,
              rows: 320381,
              dead: 0,
              indexes: 1,
              indexes_unused: 0,
              views: 33,
              matviews: 5,
              functions: 2,
              sequences: 63,
              inserted: 0,
              updated: 0,
              deleted: 0
            }

postgres/table

A PostgreSQL database table will be checked.

Data

data:
    connection: ...                 # how to reach the database (see above)
    schema: my_schema               # what to analyze
    table: my_table                 # table to analyze
    measureTime: 30s                # time to measure changes

Info

The schema is always needed here to find the correct table without the use of a search path.
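For example, the table check shown in the console output below could be configured roughly like this sketch (the password is a placeholder); note that schema and table are given together:

data:
    connection:
        host: office.database.office.dvb
        port: 5432
        user: operator
        password: secret            # placeholder
        database: dvb_manage
    schema: dvb_manage
    table: mng_media_version
    measureTime: 30s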

Result values are:

  • size - size of the complete table together with its indexes in bytes
  • size_table - size of table in bytes
  • size_data - size of all data elements in table in bytes
  • size_free - size of all free data within table in bytes
  • size_index - size of all indices in bytes
  • tables_unused - 1 if table is unused, else 0
  • rows - estimated number of data rows
  • dead - estimated number of dead rows
  • indexes - number of indexes
  • indexes_unused - number of unused indexes
  • scan_ratio - ratio between index scans and sequential scans (should be near 100%)

And the following values are growing statistics; each has an additional *_diff entry with the difference per second:

  • inserted - number of records inserted
  • updated - number of records updated
  • deleted - number of records deleted
  • scan_seq - number of sequential scans
  • scan_index - number of index scans
  • stats_age - time to calculate diffs (not stored)

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK Postgres Table (database.postgres.table)
    This test checks the PostgreSQL table dvb_manage.mng_media_version on pg://operator@office.database.office.dvb:5432/dvb_manage.
    REQUEST  select * from (
                select
              pg_total_relation_size(relid) as size,
                pg_total_relation_size(relid)-pg_indexes_size(relid) as size_table,
                pg_relation_size(relid, 'main') as size_data,
                pg_relation_size(relid, 'fsm') as size_free,
                pg_indexes_size(relid) as size_index,
                n_live_tup as rows,
                n_dead_tup as dead,
              case when idx_tup_fetch + seq_tup_read = 0 then 1 else 0 end as tables_unused
              from pg_stat_user_tables where schemaname = 'dvb_manage' and relname='mng_media_version'
            ) as size, (
              select count(*)::integer as indexes,
              count(case when idx_scan = 0 then 1 end)::integer as indexes_unused
              from pg_stat_user_indexes where schemaname = 'dvb_manage' and relname='mng_media_version'
            ) as index, (
              select * from (
              select
                n_tup_ins as inserted,
                n_tup_upd as updated,
                n_tup_del as deleted,
              seq_scan as scan_seq,
              idx_scan as scan_index
              from pg_stat_user_tables where schemaname = 'dvb_manage' and relname='mng_media_version'
            ) as values, (
              select
              extract(epoch from now() - stats_reset) as stats_age -- in seconds
              from pg_stat_database where datname = current_database()
            ) as stat
            ) as dynamic;
    RETURN   {
              size: '6303563776',
              size_table: '927367168',
              size_data: '896835584',
              size_free: '237568',
              size_index: '5376196608',
              rows: '3955053',
              dead: '23934',
              tables_unused: 0,
              indexes: 12,
              indexes_unused: 1,
              inserted: '203154',
              updated: '16057870',
              deleted: '0',
              scan_seq: '839992',
              scan_index: '47841917397',
              stats_age: 13422157.735531
            }
    REQUEST  select * from (
              select
                n_tup_ins as inserted,
                n_tup_upd as updated,
                n_tup_del as deleted,
              seq_scan as scan_seq,
              idx_scan as scan_index
              from pg_stat_user_tables where schemaname = 'dvb_manage' and relname='mng_media_version'
            ) as values, (
              select
              extract(epoch from now() - stats_reset) as stats_age -- in seconds
              from pg_stat_database where datname = current_database()
            ) as stat
    RETURN   {
              inserted: '203154',
              updated: '16057870',
              deleted: '0',
              scan_seq: '839992',
              scan_index: '47841917397',
              stats_age: 13422187.896504
            }
    RESULT   {
              size: 6303563776,
              size_table: 927367168,
              size_data: 896835584,
              size_free: 237568,
              size_index: 5376196608,
              rows: 3955053,
              dead: 23934,
              tables_unused: 0,
              indexes: 12,
              indexes_unused: 1,
              inserted: 0,
              updated: 0,
              deleted: 0,
              scan_seq: 0,
              scan_index: 0,
              scan_ratio: 1
            }

postgres/space

A PostgreSQL tablespace will be checked.

Data

data:
    connection: ...                 # how to reach the database (see above)
    tablespace: name                # space to check
    measureTime: 30s                # time to measure changes

Result values are:

  • size - size of all tables in the tablespace together with their indexes in bytes
  • size_table - size of all tables together in bytes
  • size_data - size of all data elements in tables in bytes
  • size_free - size of all free data within tables in bytes
  • size_index - size of all indices in bytes
  • tables - number of tables in the tablespace
  • tables_unused - number of unused tables
  • rows - estimated number of data rows
  • dead - estimated number of dead rows
  • indexes - number of indexes
  • indexes_unused - number of unused indexes

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK Postgres Tablespace (database.postgres.space)
    This test checks the PostgreSQL tablespace tbs_dvb_manage on pg://operator@office.database.office.dvb:5432/dvb_manage.
    REQUEST  select * from (
              select
              sum(pg_total_relation_size(tables.relid)) as size,
                sum(pg_total_relation_size(tables.relid)-pg_indexes_size(tables.relid)) as size_table,
                sum(pg_relation_size(tables.relid, 'main')) as size_data,
                sum(pg_relation_size(tables.relid, 'fsm')) as size_free,
                sum(pg_indexes_size(tables.relid)) as size_index,
              count(*)::integer as tables,
              count(case when idx_tup_fetch + seq_tup_read = 0 then 1 end)::integer as tables_unused,
                sum(n_live_tup) as rows,
                sum(n_dead_tup) as dead
              from (
                select
                (ts.datname||'.'||t.schemaname||'.'||t.tablename)::regclass::oid as relid
                from pg_tables t
                cross join (
                  select d.datname, ts.spcname as default_tablespace
                  from pg_database d
                  join pg_tablespace ts on ts.oid = d.dattablespace
                  --where d.datname = current_database()
                ) ts
                where schemaname not like 'pg_%' and schemaname != 'information_schema'
                and coalesce(t.tablespace, ts.default_tablespace) = 'tbs_dvb_manage'
              ) as tables
              left join pg_stat_user_tables sut on sut.relid=tables.relid
            ) as size, (
            select
              count(*)::integer as indexes,
              count(case when idx_scan = 0 then 1 end)::integer as indexes_unused
              from (
              select
              (ts.datname||'.'||i.schemaname||'.'||i.indexname)::regclass::oid as relid
              from pg_indexes i
                cross join (
                  select d.datname, ts.spcname as default_tablespace
                  from pg_database d
                  join pg_tablespace ts on ts.oid = d.dattablespace
                  --where d.datname = current_database()
                ) ts
                where schemaname not like 'pg_%' and schemaname != 'information_schema'
                and coalesce(i.tablespace, ts.default_tablespace) = 'tbs_dvb_manage'
              ) as indexes
              left join pg_stat_user_indexes sui on sui.relid=indexes.relid
            ) as index;
    RETURN   {
              size: '496497664000',
              size_table: '212573413376',
              size_data: '211775242240',
              size_free: '56803328',
              size_index: '283924250624',
              tables: 238,
              tables_unused: 48,
              rows: '1974134584',
              dead: '11623436',
              indexes: 442,
              indexes_unused: 0
            }
    RESULT   {
              size: 496497664000,
              size_table: 212573413376,
              size_data: 211775242240,
              size_free: 56803328,
              size_index: 283924250624,
              tables: 238,
              tables_unused: 48,
              rows: 1974134584,
              dead: 11623436,
              indexes: 442,
              indexes_unused: 0
            }

postgres/user

A PostgreSQL database user will be checked.

Data

data:
    connection: ...                 # how to reach the database (see above)
    user: my_user                   # login user
    measureTime: 30s                # time to measure changes

Result values are:

  • user_ip - list of ip addresses connected
  • user_app - list of connected applications
  • conn_max - maximum number of connections
  • conn - number of current connections
  • conn_free - number of free connections to be used
  • conn_active - number of active connections
  • conn_idle - number of idle connections
  • conn_wait - number of waiting connections
  • transaction_age - age of oldest active transaction in seconds
  • query_age - age of oldest running query in seconds
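conn_free is not queried directly; judging from the RESULT below it appears to be derived from the other values as:

    conn_free = conn_max - conn = 300 - 10 = 290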

Fixes

No additional analysis here.

Repair

No automatic fixing is possible here.

Result

Console output
   OK Postgres User (database.postgres.user)
    This test checks the PostgreSQL user operator on pg://operator@office.database.office.dvb:5432/dvb_manage.
    REQUEST  select * from (
              select
              case
                when rolsuper = true and rolconnlimit = -1 then current_setting('max_connections')::integer
                when rolconnlimit = -1 then current_setting('max_connections')::integer - current_setting('superuser_reserved_connections')::integer
                else rolconnlimit
              end as conn_max
              FROM pg_roles WHERE rolname = 'operator'
            ) as usr, (
              select
                string_agg(distinct case when application_name = '' then null else application_name end, ', ') as app_names,
              extract(epoch from current_timestamp - min(xact_start)) as transaction_age,
              extract(epoch from current_timestamp - min(case when state = 'active' then query_start else null end)) as query_age,
              count(*)::integer as conn,
              count(case when state = 'active' then 1 end)::integer as conn_active,
              count(case when state != 'active' then 1 end)::integer as conn_idle,
              count(case when wait_event is not null and wait_event not in('ClientRead') then 1 end)::integer as conn_wait
              from pg_catalog.pg_stat_activity where usename = 'operator'
            ) as activity;
    RETURN   {
              conn_max: 300,
              app_names: 'dbvis, PostgreSQL JDBC Driver',
              transaction_age: 0,
              query_age: 0,
              conn: 10,
              conn_active: 1,
              conn_idle: 9,
              conn_wait: 0
            }
    RESULT   {
              conn_max: 300,
              app_names: [ 'dbvis', 'PostgreSQL JDBC Driver' ],
              transaction_age: 0,
              query_age: 0,
              conn: 10,
              conn_active: 1,
              conn_idle: 9,
              conn_wait: 0,
              conn_free: 290
            }

mongodb

A MongoDB server will be checked. This includes lots of health parameters.

Data

data:
    connection: ...                 # how to reach the database (see above)
    warn:
        time: 100                   # maximum connection time (ms)
        uptime: 1 h                 # minimum time since last boot
        conn: 100                   # maximum number of connections
        conn_free: 100              # minimum number of free connections
        cache_free: 10 MiB          # number of free bytes in cache
        read_latency: 100 ms        # maximum latency for read operations
        write_latency: 100 ms       # maximum latency for write operations
    #   writable: true              # should the database be writable
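A configuration sketch that would roughly match the console output below; the host and thresholds are illustrative only:

data:
    connection:
        host: localhost             # MongoDB host
        port: 27017                 # MongoDB default port
        database: alinex_server
    warn:
        time: 100                   # maximum connection time (ms)
        conn_free: 100              # minimum number of free connections
        cache_free: 10 MiB          # minimum free cache
        read_latency: 100 ms
        write_latency: 100 ms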

Result values are:

  • time - time in ms to open connection and get a ping response
  • connect - is connection possible
  • version - database server version
  • pid - process id of database server process
  • uptime - database uptime in seconds
  • warnings - number of warnings since last start
  • memory - used memory in MiB
  • conn - number of current active connections
  • conn_free - number of free, possible connections
  • cache_free - free bytes in cache
  • queue_per_client - ratio of operations in queue to the number of clients
  • read_latency - latency for read operations in ms
  • write_latency - latency for write operations in ms
  • is_master - is this the master server
  • read_only - is this server read only or also writable

Fixes

In case of a warning or error:

  1. The TCP connection will be checked.
  2. Ping the host.
  3. The result from the database check will be interpreted.

Repair

No automatic fixing is possible here.

Result

Console output
  OK Mongo DB (database.mongodb)
    This test will check the mongo database under [object Object].
    REQUEST  connect to mongodb://localhost:27017/alinex_server
    REQUEST  db.ping
    RETURN   { ok: 1 }
    REQUEST  db.serverStatus
    RETURN   {
              host: 'alex-notebook',
              version: '3.6.3',
              process: 'mongod',
              ...
            }
    RESULT   {
              time: 33,
              connect: true,
              version: '3.6.3',
              pid: 1061,
              uptime: 17321,
              warnings: 0,
              memory: 72,
              conn: 2,
              conn_free: 51196,
              cache_free: 7809646191,
              queue_per_lient: 0,
              read_latency: 0,
              write_latency: 0,
              is_master: true,
              read_only: false,
              collections: 3,
              views: 0,
              objects: 10,
              indexes: 8,
              size_data: 3460,
              size_storage: 86016,
              size_index: 163840
            }
    VALIDATE Check values for errors
                A map like data object. The items have the following format:
                -   time: The time to connect and get a ping response should be not more than 500 ms.
                    A numeric value. The value has to be less or equal 500.
                -   conn_free: At least 100 free connections should be available.
                    A numeric value. The value has to be greater or equal 100.
    VALIDATE Check values for warnings
                A map like data object. The items have the following format:
                -   time: The time to connect and get a ping response should be not more than 100 ms.
                    A numeric value. The value has to be less or equal 100.
                -   cache_free: At least 10 MiB of cache should be free.
                    A numeric value. The value has to be greater or equal 10485760.
                -   read_latency: The read latency should not be more than 100 ms.
                    A numeric value. The value has to be less or equal 100.
                -   write_latency: The write latency should not be more than 100 ms.
                    A numeric value. The value has to be less or equal 100.