Help for Foxhound 5.0.5516a
The SQL Anywhere user id ADHOC and password SQL can be used to connect to the foxhound5.db database to run adhoc queries:
-c "ENG=foxhound5; DBN=f; UID=ADHOC; PWD=SQL; DRIVER=SQL Anywhere 17;"Foxhound comes with this shortcut to run adhoc queries:
All Programs - Foxhound5 - 2 Adhoc Query via ISQL

Here's what the ISQL command line looks like:
"!SQLANY17!\!BIN!\dbisql.com" -c "ENG=foxhound5; DBN=f; UID=ADHOC; PWD=SQL; CON=Foxhound5-ADHOC"Here's how to change the password:
GRANT CONNECT TO ADHOC IDENTIFIED BY 'JqDCt64Kfy73';
Administrative Tip: The ADMIN user id can do everything the ADHOC user id can do, plus more, and it can see all the same Foxhound database views and other objects. For more information about the ADMIN user id see 9. Administrator Authentication and 10. Monitor Sessions API.
A separate adhoc schema database is delivered with Foxhound, containing all the views and underlying tables that are available for adhoc reporting. To see the adhoc schema, use the String tab of the Foxhound menu page to select this connection string:

Foxhound 5 Adhoc Schema - autostart and connect

then click on the Display Schema button.
Note: You can't run queries on the Adhoc Schema database; it's just there to show you what the schema looks like.
You can create tables, procedures and views, as well as code SELECT statements. There are some limitations, however; here's a list of what you can and cannot do:
no   CREATE EVENT
no   CREATE EXISTING TABLE
yes  CREATE FUNCTION
no   CREATE FUNCTION [External call]
yes  CREATE INDEX
no   CREATE MATERIALIZED VIEW
yes  CREATE PROCEDURE
no   CREATE PROCEDURE [External call]
no   CREATE SEQUENCE
no   CREATE SERVICE
yes  CREATE TABLE
yes  CREATE TEMPORARY PROCEDURE
no   CREATE TRIGGER
no   CREATE USER
yes  CREATE VARIABLE [Connection-scope]
no   CREATE VARIABLE [Database-scope]
yes  CREATE VIEW
yes  DELETE
yes  INSERT
yes  SELECT
yes  TRUNCATE
yes  UNLOAD
yes  UPDATE

Note that DELETE, INSERT, TRUNCATE and UPDATE statements are allowed, but they'll only work on tables you create, not the pre-existing Foxhound tables.
You have to use views to write your adhoc queries, not the base tables. In other words, the base tables like rroad_group_1_property_pivot are shown in the adhoc schema so you can see the foreign keys and indexes, but only the corresponding views like sample_detail can be used in your queries.
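For example, here is a minimal sketch of both rules in action: a table you create yourself (the name adhoc_cpu_snapshot is hypothetical) accepts DML, and it is loaded by querying the sample_detail view rather than its underlying base table:

-- Hypothetical working table owned by ADHOC; DML is permitted on tables you create.
CREATE TABLE adhoc_cpu_snapshot (
   sampling_id            INTEGER,
   sample_recorded_at     TIMESTAMP,
   interval_CPU_percent   DECIMAL ( 30, 6 ) );

-- Query the sample_detail view, not the rroad_group_1_property_pivot base table.
INSERT INTO adhoc_cpu_snapshot
SELECT sample_detail.sampling_id,
       sample_detail.sample_recorded_at,
       sample_detail.interval_CPU_percent
  FROM sample_detail
 WHERE sample_detail.sampling_id = 1;   -- CHANGE THIS IF NECESSARY
COMMIT;

SELECT * FROM adhoc_cpu_snapshot ORDER BY sample_recorded_at;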
administrator - User name and password for each administrator defined for the Foxhound Administrator Authentication feature.
alert - One row per alert.
alert_cancelled - One row per alert cancellation.
alert_title - One row for the text title corresponding to each Alert number.
alert_union - A UNION of all the rows in the alert, alert_cancelled, all_clear and sampling_options views to make reporting easier.
alerts_criteria - One row containing the Monitor Options page settings for each sampling session, plus rows for 'Factory Settings' and 'Saved Defaults'.
all_clear - One row per alert all-clear.
authentication - A single-row table containing some attributes supporting the Foxhound Administrator Authentication feature.
autodropped_connection - One row per connection that was dropped by the AutoDrop facility.
build_number - A single-row table containing some attributes of the current installation of the Foxhound database.
connection_string - One row for each connection string displayed on the String tab of the Foxhound main menu.
data_upgrade_history - One row for each time the data in an existing Foxhound database has been copied into a new Foxhound database as part of the installation process.
email_failure - One row for each time Foxhound failed in an attempt to send an Alert or other email.
exception_diagnostic - One row for each time Foxhound detected an error or other important event. In some cases, a single underlying error may result in two or more rows in exception_diagnostic recorded by nested exception handlers in Foxhound.
exception_dump - One row for each time Foxhound stored extra internal diagnostic information associated with an event recorded in exception_diagnostic.
expiry_date - A single-row table containing some attributes of the current installation of the Foxhound database.
flag_title - One row for the text title corresponding to each connection flag number.
global_options - A single-row table containing some options affecting how Foxhound behaves.
logged_administrator - Audit trail containing before and/or after row images of rroad_administrator INSERT, UPDATE and DELETE operations.
logged_authentication - Audit trail containing before and/or after row images of rroad_authentication INSERT, UPDATE and DELETE operations.
long_varchar - One row for each value that was stored separately by Foxhound's internal string de-duplication process.
OFSS_subject_settings_view - One row for each OFSS subject database that has had one or more OFSS batch files loaded into Foxhound.
patch - One row for each patch file that was processed when Foxhound was started.
peaks - One row for each Foxhound Monitor target database, holding various peak values and links to the corresponding samples.
ping_log - One row for each execution of Foxhound's custom ping process.
purge_run - One row for each run of the Foxhound database purge process, holding information about the progress of the current run and the work accomplished by previous runs.
purge_schedule_day_entry - One row for each day in the purge schedule defined on the Foxhound Options page. Note that no separate "purge_schedule" view exists because only one purge schedule is defined.
purge_schedule_period_entry - One row for each 15-minute period in one day in the purge schedule defined on the Foxhound Options page. Note that no separate "purge_schedule" view exists because only one purge schedule is defined.
sample_detail - One row for each sample recorded by the Foxhound Database Monitor, holding various additional server and database-level properties and computed columns.
sample_connection - One row for each connection recorded for each Foxhound Database Monitor sample, holding various connection-level properties and computed columns.
sample_header - One row for each sample recorded by the Foxhound Database Monitor, holding various server and database-level properties and computed columns.
sampling_options - One row for each Foxhound Monitor target database, holding various options affecting how the Monitor behaves and information about the current status of the Monitor session.
schedule - One row for each week-long schedule defined on the Monitor Options page.
schedule_day_entry - One row for each day in one schedule defined on the Monitor Options page.
schedule_period_entry - One row for each 15-minute period in one day in one schedule defined on the Monitor Options page.
serial_number - A single-row table containing the Foxhound serial number as shown on the About page.
server_message - One row for each server message returned by a call to sa_server_messages().
session_options - One row for each HTTP session established during the current execution of the Foxhound database.
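As a quick sanity check of the schema, here is a minimal sketch that lists the Monitor target databases; it uses only sampling_options columns that also appear in the examples later in this section:

SELECT sampling_options.sampling_id,
       IF sampling_options.selected_tab = 1
          THEN 'DSN'
          ELSE 'String' END IF AS connection_type,
       sampling_options.selected_name AS target_database
  FROM sampling_options
 ORDER BY sampling_options.sampling_id;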
Foxhound uses a variety of SQL functions to reformat performance statistics for readability, and some of those functions are available for use in adhoc queries. Some of these functions are used in the example Compute the "Totals:" line removed from the Monitor and History pages.
CREATE FUNCTION rroad_f_bytes_as_kmg
CREATE FUNCTION rroad_f_cpu_percentage_string
CREATE FUNCTION rroad_f_decimal_with_commas
CREATE FUNCTION rroad_f_limit_string_length_with_ellipsis
CREATE FUNCTION rroad_f_msecs_as_abbreviated_d_h_m_s_ms
CREATE FUNCTION rroad_f_number_with_commas
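To experiment with these functions before using them in a larger query, you can call them directly from ISQL; here is a minimal sketch using SQL Anywhere's built-in dummy table, with expected results (taken from the examples below) shown as comments:

SELECT rroad_f_bytes_as_kmg ( 1057505280 )                   AS kmg,           -- 1,009M
       rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 169200000 ) AS d_h_m_s_ms,    -- 1d 23h
       rroad_f_number_with_commas ( 1234567.8901234 )        AS with_commas    -- 1,234,567.8901234
  FROM dummy;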
The rroad_f_bytes_as_kmg function reduces large byte values to more readable multiples of k, M, G and T. Foxhound uses this function for many displayed values like Throughput... Bytes.
CREATE FUNCTION rroad_f_bytes_as_kmg (
   IN @p_bytes          UNSIGNED BIGINT,
   IN @rounding_digit   INTEGER DEFAULT 1 )   -- this should be either 0 or 1
   RETURNS VARCHAR ( 200 )

rroad_f_bytes_as_kmg ( 9223372036854775807 )   8,388,608T
rroad_f_bytes_as_kmg ( 1057505280 )            1,009M
rroad_f_bytes_as_kmg ( 1024000 )               1,000k
rroad_f_bytes_as_kmg ( 1048576000 )            1,000M
rroad_f_bytes_as_kmg ( 1073741824000 )         1,000G
rroad_f_bytes_as_kmg ( 1099511627776000 )      1,000T
rroad_f_bytes_as_kmg ( 1124100 )               1.1M
rroad_f_bytes_as_kmg ( 1148678400 )            1.1G
rroad_f_bytes_as_kmg ( 1173846681600 )         1.1T
rroad_f_bytes_as_kmg ( 1124100, 0 )            1M
rroad_f_bytes_as_kmg ( 1148678400, 0 )         1G
rroad_f_bytes_as_kmg ( 1173846681600, 0 )      1T
The rroad_f_cpu_percentage_string function calculates a readable CPU usage percentage for a given time interval, taking into account the number of CPUs in use. It also limits wildly out-of-bounds values to 100%, and allows zero values to be represented as a dash. Foxhound uses this function for the CPU time display values.
CREATE FUNCTION rroad_f_cpu_percentage_string (
   IN @interval_ProcessCPU   DECIMAL ( 30, 6 ),
   IN @interval_msec         BIGINT,
   IN @CPU_count             INTEGER,
   IN @show_zero_as_dash     VARCHAR ( 1 ) )
   RETURNS VARCHAR ( 10 )

rroad_f_cpu_percentage_string ( 999.0, 2000, 1, 'N' )   100%
rroad_f_cpu_percentage_string ( 2.0,   2000, 1, 'N' )   100%
rroad_f_cpu_percentage_string ( 1.99,  2000, 1, 'N' )   99.5%
rroad_f_cpu_percentage_string ( 1.0,   2000, 2, 'N' )   25.0%
rroad_f_cpu_percentage_string ( 0.0,   2000, 2, 'N' )   0%
rroad_f_cpu_percentage_string ( 0.0,   2000, 2, 'Y' )   -
rroad_f_cpu_percentage_string ( 0.01,  2000, 2, 'N' )   .3%
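The underlying arithmetic is straightforward: CPU seconds divided by ( interval seconds times CPU count ). For example, 1.0 CPU seconds over a 2,000 msec interval on 2 CPUs is 1.0 / ( 2.0 * 2 ) = 25.0%, matching the fourth example above. A minimal sketch of that calculation:

SELECT CAST ( 1.0                       -- interval CPU seconds
              / ( ( 2000 / 1000.0 )     -- interval length in seconds
                  * 2 )                 -- CPU count
              * 100.0
              AS DECIMAL ( 5, 1 ) ) AS cpu_percent   -- returns 25.0
  FROM dummy;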
The rroad_f_limit_string_length_with_ellipsis function strips some line break and tab characters and then limits the length of a character string by replacing excess characters with an ellipsis "..." in the center of the string or at the right end (the default). Foxhound uses this function in a few locations to cram wide strings into narrow spaces.
CREATE FUNCTION rroad_f_limit_string_length_with_ellipsis (
   IN @input_string        LONG VARCHAR,
   IN @limit               BIGINT,
   IN @ellipsis_location   VARCHAR ( 10 ) DEFAULT 'right' )   -- or 'center'
   RETURNS LONG VARCHAR

rroad_f_limit_string_length_with_ellipsis ( 'Short title', 11 )                          Short title
rroad_f_limit_string_length_with_ellipsis ( 'A very much longer title', 11, 'center' )   A ve...itle
rroad_f_limit_string_length_with_ellipsis ( 'Short title', 11, 'center' )                Short title
rroad_f_limit_string_length_with_ellipsis ( 'A very much longer title', 11 )             A very m...
The rroad_f_msecs_as_abbreviated_d_h_m_s_ms function reformats large millisecond interval values into readable strings using the unit abbreviations d, h, m, s and ms. Foxhound uses this function for many displayed values like Response... Heartbeat, Sample, Ping.
CREATE FUNCTION rroad_f_msecs_as_abbreviated_d_h_m_s_ms (
   IN @msecs BIGINT )
   RETURNS VARCHAR ( 20 )

rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 9590400000 )   111d
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 1028160000 )   11.9d
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 169200000 )    1d 23h
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 120240000 )    1d 9.4h
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 43020000 )     12h
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 42984000 )     11.9h
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 7140000 )      1h 59m
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 714000 )       11.9m
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 71000 )        1m 11s
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 11940 )        11.9s
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 1950 )         2s
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 1940 )         1.9s
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 149 )          149ms
rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( 1 )            1ms
The rroad_f_number_with_commas function inserts commas in the integer portion of a number. Leading zeros are implicitly stripped from a numeric argument but not if the argument is already a string. Foxhound uses this function for many displayed values like Throughput... Req, Commits.
CREATE FUNCTION rroad_f_number_with_commas (
   IN @p_number_string VARCHAR ( 100 ) )
   RETURNS VARCHAR ( 200 )

rroad_f_number_with_commas ( 1234567.8901234 )    1,234,567.8901234
rroad_f_number_with_commas ( 00000001111111 )     1,111,111
rroad_f_number_with_commas ( '00000001111111' )   00,000,001,111,111
The rroad_f_decimal_with_commas function does more than insert commas in large numbers: it also replaces zero with a dash, limits the precision of fractional numbers and reduces tiny fractions to zero. Foxhound uses this function for some displayed values like Disk/Cache: Internal Index, Leaf, Table.
CREATE FUNCTION rroad_f_decimal_with_commas (
   IN @decimal                    DECIMAL ( 30, 6 ),
   IN @round_to_non_zero_digits   INTEGER DEFAULT 2 )   -- 1 or 2
   RETURNS VARCHAR ( 200 )

rroad_f_decimal_with_commas ( NULL )           -
rroad_f_decimal_with_commas ( 0.0 )            -
rroad_f_decimal_with_commas ( 0.19 )           .19
rroad_f_decimal_with_commas ( 1.29 )           1.3
rroad_f_decimal_with_commas ( 123456789 )      123,456,789
rroad_f_decimal_with_commas ( 123.9 )          124
rroad_f_decimal_with_commas ( 12.9 )           13
rroad_f_decimal_with_commas ( 1.29 )           1.3
rroad_f_decimal_with_commas ( 0.19 )           .19
rroad_f_decimal_with_commas ( 0.019 )          .019
rroad_f_decimal_with_commas ( 0.0019 )         .0019
rroad_f_decimal_with_commas ( 0.00019 )        0
rroad_f_decimal_with_commas ( 0.000019 )       0
rroad_f_decimal_with_commas ( NULL, 1 )        -
rroad_f_decimal_with_commas ( 0.0, 1 )         -
rroad_f_decimal_with_commas ( 0.19, 1 )        .2
rroad_f_decimal_with_commas ( 1.29, 1 )        1.3
rroad_f_decimal_with_commas ( 123456789, 1 )   123,456,789
rroad_f_decimal_with_commas ( 123.9, 1 )       124
rroad_f_decimal_with_commas ( 12.9, 1 )        13
rroad_f_decimal_with_commas ( 1.29, 1 )        1.3
rroad_f_decimal_with_commas ( 0.19, 1 )        .2
rroad_f_decimal_with_commas ( 0.019, 1 )       .02
rroad_f_decimal_with_commas ( 0.0019, 1 )      .002
rroad_f_decimal_with_commas ( 0.00019, 1 )     0
rroad_f_decimal_with_commas ( 0.000019, 1 )    0
Here is the list of adhoc query examples:

Active alerts for each target database
Alert #1 Database unresponsive activity for a particular target database
Alert history
Audit trail of changes to the rroad_administrator table
Audit trail of changes to the rroad_authentication table
Averages and maximums by hour
Blocked connection samples
Busy SQL statements
CPU seconds by user
CPU usage by user id
CPU usage for intra-query parallelism child connections
Checkpoints and checkpoint urgency
Compute the "Totals:" line removed from the Monitor and History pages
Connection-level temporary space usage
Count active alerts for each target database
Count current request status values
Count intra-query parallelism
Count rows remaining in Foxhound tables that are subject to the purge process
Count samples by CPU % ranges
Count simultaneous high CPU connections
CurrentCacheSize: Min/Max and History
Currently blocked connections for each target database
Database growth
Display the Foxhound Administrator Authentication attributes
External client connections per hour
Find "Go to:" sample set numbers for Alert #1 / All Clear/Cancelled pairs at least 60 seconds apart
Find when connections became blocked
First and last successful sample timestamps across all target databases
First and last successful sample timestamps for each target database
High-CPU queries with HTML formatting
High connection-level CPU usage with HTML formatting
List all the Foxhound Administrator Authentication users
List all the patches that have been applied to the Foxhound database
Long-running queries
Long-running queries with HTML formatting
Number of samples for each target database
Peak average sample rates
Recent purge runs
Recent sample headers and details for each target database
Recent sample headers, details and connections for each target database
Recent sample headers for each target database
Response and throughput with 100-sample moving averages
Response time volatility and connection churn
Sample details selected by exact primary key values
Sample header, details and connections selected by exact primary key values
Sample sessions for all target databases
Selected columns from the most recent purge runs
Server Messages and Alerts for High Availability and Read-Only Scale-Out Targets
Server and database averages
Show correlation between Alerts and Intra-Query Parallelism
Some AND all the connection data for one connection
Some AND all the connection data for one connection, up to and including a specific sample timestamp
Study three successive samples
Sudden impact connections
Temporary file high tide
Unpivot (normalize) sample_connection.flags_set into flag_number and flag_title columns
Very-high-CPU connection samples
Active alerts for each target database

-- The CAST() and rroad_f_limit_string_length_with_ellipsis() functions are used
-- together to limit the width of the Target Database column.

SELECT CAST ( rroad_f_limit_string_length_with_ellipsis (
                 STRING ( sampling_options.sampling_id,
                          ' - ',
                          sampling_options.selected_name,
                          IF sampling_options.selected_tab = '1'
                             THEN ' (DSN)'
                             ELSE '' END IF ),
                 50,
                 'right' ) AS VARCHAR ( 50 ) ) AS "Target Database",
       alert.*
  FROM sampling_options
       INNER JOIN alert
          ON alert.sampling_id = sampling_options.sampling_id
 WHERE alert.alert_is_clear_or_cancelled = 'N'
 ORDER BY sampling_options.sampling_id ASC,
          alert.alert_in_effect_at DESC;
Alert #1 Database unresponsive activity for a particular target database

SELECT CAST ( STRING ( sampling_options.sampling_id,
                       ' - ',
                       sampling_options.selected_name,
                       IF sampling_options.selected_tab = '1'
                          THEN ' (DSN)'
                          ELSE '' END IF ) AS VARCHAR ( 50 ) ) AS "Target Database",
       CAST ( STRING ( alert.alert_number, ' ', alert_title.alert_title ) AS VARCHAR ( 24 ) ) AS "Alert",
       DATEFORMAT ( alert.alert_in_effect_at, 'yyyy-mm-dd hh:nn:ss.sss' ) AS "From",
       IF alert.alert_is_clear_or_cancelled = 'N'
          THEN ''
          ELSE DATEFORMAT ( COALESCE ( all_clear.alert_all_clear_at,
                                       alert_cancelled.alert_all_clear_at ),
                            'yyyy-mm-dd hh:nn:ss.sss' ) END IF AS "To",
       IF alert.alert_is_clear_or_cancelled = 'N'
          THEN ''
          ELSE rroad_f_msecs_as_abbreviated_d_h_m_s_ms (
                  DATEDIFF ( MILLISECOND,
                             alert.alert_in_effect_at,
                             COALESCE ( all_clear.alert_all_clear_at,
                                        alert_cancelled.alert_all_clear_at ) ) ) END IF AS "Duration"
  FROM sampling_options
       INNER JOIN alert
          ON alert.sampling_id = sampling_options.sampling_id
       LEFT OUTER JOIN all_clear
          ON all_clear.sampling_id        = sampling_options.sampling_id
         AND all_clear.alert_number       = alert.alert_number
         AND all_clear.alert_in_effect_at = alert.alert_in_effect_at
       LEFT OUTER JOIN alert_cancelled
          ON alert_cancelled.sampling_id        = sampling_options.sampling_id
         AND alert_cancelled.alert_number       = alert.alert_number
         AND alert_cancelled.alert_in_effect_at = alert.alert_in_effect_at
       INNER JOIN alert_title
          ON alert.alert_number = alert_title.alert_number
 WHERE sampling_options.sampling_id = 1   -- CHANGE THIS IF NECESSARY
   AND alert.alert_number = 1
 ORDER BY "From" DESC,
          "Target Database",
          "Alert";

Target Database                       Alert                   From                     To                       Duration
------------------------------------  ----------------------  -----------------------  -----------------------  --------
6 - RuralFinds_Bonnie-inspiron (DSN)  1 Database unresponsive 2018-12-05 10:55:09.633
6 - RuralFinds_Bonnie-inspiron (DSN)  1 Database unresponsive 2018-12-04 13:21:59.670  2018-12-04 17:15:37.251  3h 54m
6 - RuralFinds_Bonnie-inspiron (DSN)  1 Database unresponsive 2018-12-03 15:02:16.194  2018-12-03 16:33:40.483  1h 31m
6 - RuralFinds_Bonnie-inspiron (DSN)  1 Database unresponsive 2018-12-03 07:35:39.594  2018-12-03 10:29:04.782  2h 53m
6 - RuralFinds_Bonnie-inspiron (DSN)  1 Database unresponsive 2018-10-02 07:28:19.662  2018-10-02 08:30:37.967  1h 2.3m
Alert history

-- History of Active and Clear/Cancelled Alerts in reverse order by timestamp.
-- Copy and paste the "Active At" value into the "Go to:" field on the Sample History page
-- to scroll to the Alert message.

SELECT DATEFORMAT ( alert.alert_in_effect_at, 'yyyy-mm-dd hh:nn:ss.sss' ) AS "Active At",
       sampling_options.sampling_id AS "ID",
       CAST ( STRING ( alert.alert_number, ' ', alert_title.alert_title ) AS VARCHAR ( 24 ) ) AS "Alert",
       IF alert.alert_is_clear_or_cancelled = 'N'
          THEN 'Still Active'
          ELSE STRING ( 'Clear/Cancelled at ',
                        DATEFORMAT ( COALESCE ( all_clear.alert_all_clear_at,
                                                alert_cancelled.alert_all_clear_at ),
                                     'yyyy-mm-dd hh:nn:ss.sss' ) ) END IF AS "Status"
  FROM sampling_options
       INNER JOIN alert
          ON alert.sampling_id = sampling_options.sampling_id
       LEFT OUTER JOIN all_clear
          ON all_clear.sampling_id        = sampling_options.sampling_id
         AND all_clear.alert_number       = alert.alert_number
         AND all_clear.alert_in_effect_at = alert.alert_in_effect_at
       LEFT OUTER JOIN alert_cancelled
          ON alert_cancelled.sampling_id        = sampling_options.sampling_id
         AND alert_cancelled.alert_number       = alert.alert_number
         AND alert_cancelled.alert_in_effect_at = alert.alert_in_effect_at
       INNER JOIN alert_title
          ON alert.alert_number = alert_title.alert_number
 ORDER BY "Active At" DESC,
          "ID",
          "Alert";

Active At                ID  Alert                    Status
-----------------------  --  -----------------------  ------------------------------------------
2018-12-04 14:04:54.824   7  1 Database unresponsive  Still Active
2018-12-04 13:21:59.670   6  19 Cache size            Still Active
2018-12-03 15:02:16.194   6  19 Cache size            Clear/Cancelled at 2018-12-03 16:33:40.483
2018-12-03 07:35:39.594   6  19 Cache size            Clear/Cancelled at 2018-12-03 10:29:04.782
2018-10-02 07:28:19.662   6  19 Cache size            Clear/Cancelled at 2018-10-02 08:30:37.967
2018-07-24 15:49:59.508   6  19 Cache size            Clear/Cancelled at 2018-07-24 16:57:53.116
2018-06-27 16:44:19.151   6  1 Database unresponsive  Clear/Cancelled at 2018-06-27 16:44:29.779
2018-06-27 11:23:40.378   6  19 Cache size            Clear/Cancelled at 2018-06-27 16:57:01.119
2018-06-22 09:09:16.288   6  28 Long transaction      Clear/Cancelled at 2018-06-22 09:10:56.061
2018-06-22 09:08:56.698   6  16 I/O operations        Clear/Cancelled at 2018-06-22 09:11:06.047
2018-06-22 09:04:15.302   6  19 Cache size            Clear/Cancelled at 2018-06-22 10:48:24.075
2018-05-16 04:08:40.587   5  28 Long transaction      Clear/Cancelled at 2018-05-16 04:24:22.072
Audit trail of changes to the rroad_administrator table

-- Changes made via SQL Anywhere ADMIN user id.

UPDATE rroad_administrator
   SET password_hash = HASH ( 'Hello', 'SHA256' )
 WHERE user_name = 'admin';
COMMIT;

UPDATE rroad_administrator
   SET password_hash = HASH ( 'World', 'SHA256' )
 WHERE user_name = 'admin';
COMMIT;

-- Audit trail query via SQL Anywhere ADHOC user id.

SELECT *
  FROM logged_administrator
 ORDER BY log_id;

log_id, logged_action, logged_at, logged_by, user_name, password_hash
1, 'before UPDATE', 2019-10-18 14:01:37.85,  'ADMIN', 'admin', 'a7056a455639d1c7deec82ee787db24a0c1878e2792b4597709f0facf7cc7b35'
2, 'after UPDATE',  2019-10-18 14:01:37.85,  'ADMIN', 'admin', '185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969'
3, 'before UPDATE', 2019-10-18 14:01:37.853, 'ADMIN', 'admin', '185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969'
4, 'after UPDATE',  2019-10-18 14:01:37.853, 'ADMIN', 'admin', '78ae647dc5544d227130a0682a51e30bc7777fbb6d8a8f17007463a3ecd1d524'
Audit trail of changes to the rroad_authentication table

-- Changes made via SQL Anywhere ADMIN user id.

UPDATE rroad_authentication
   SET authentication_enabled = 'Y';
COMMIT;

-- Audit trail query via SQL Anywhere ADHOC user id.

SELECT *
  FROM logged_authentication
 ORDER BY log_id;

log_id, logged_action, logged_at, logged_by, artificial_id, authentication_enabled, user_name, cookie_max_age_seconds, session_timeout_minutes, session_id, debug_messages
1, 'before UPDATE', 2019-10-18 14:28:31.119, 'ADMIN', 1, 'N', (NULL), 43200, 720, (NULL), 'OFF'
2, 'after UPDATE',  2019-10-18 14:28:31.119, 'ADMIN', 1, 'Y', (NULL), 43200, 720, (NULL), 'OFF'
Averages and maximums by hour

SELECT RIGHT ( STRING ( '0', HOUR ( sample_detail.sample_recorded_at ), ':00' ), 5 ) AS "Hour",
       CAST ( AVG ( sample_detail.interval_CPU_percent ) AS DECIMAL ( 5, 1 ) ) AS "Avg CPU %",
       CAST ( MAX ( sample_detail.executing_connection_count ) AS INTEGER ) AS "Max Exec Conns",
       CAST ( MAX ( sample_detail.UnschReq ) AS INTEGER ) AS "Max UnschReq",
       CAST ( MAX ( sample_detail.total_child_connection_count ) AS INTEGER ) AS "Max Child Conns"
  FROM sample_detail
 WHERE sample_detail.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND sample_detail.interval_CPU_percent IS NOT NULL
 GROUP BY "Hour"
 ORDER BY "Hour";

Hour   Avg CPU %  Max Exec Conns  Max UnschReq  Max Child Conns
-----  ---------  --------------  ------------  ---------------
00:00        2.7              11             1                8
01:00        1.4              10             0                8
02:00        1.3               3             1                2
03:00        7.1              12             0                8
04:00        1.2               6             0                4
05:00        0.9              10             0                8
06:00        1.8              11             1                8
07:00        5.2              12             2                8
08:00       13.8              20             2               16
09:00       17.7              16             8                8
10:00       18.5              15             9                8
11:00       21.0              17             5                8
12:00       17.8              20             2                8
13:00       18.2              16             5                8
14:00       22.0              17             3                0
15:00       22.5              15             7                6
16:00       18.8              13             4                8
17:00        9.3              11             1                8
18:00        5.6              12             1                8
19:00        3.6              11             1                8
20:00        5.1              12             1                8
21:00        6.6              13             1                8
22:00        3.9              21             1               16
23:00        2.1               3             1                0
Blocked connection samples

SELECT TOP 1000
       sample_connection.sampling_id,
       IF sampling_options.selected_tab = 1
          THEN 'DSN'
          ELSE 'String' END IF AS connection_type,
       sampling_options.selected_name AS target_database,
       sample_connection.sample_set_number,
       sample_header.sample_finished_at AS sample_recorded_at,
       sample_connection.connection_number AS blocked_connection_number,
       sample_connection.BlockedOn AS blocked_by_connection_number,
       sample_connection.LastReqTime,
       sample_connection.LastStatement,
       sample_connection.blocker_reason AS reason_for_block,
       sample_connection.blocker_owner_name AS owner_name,
       sample_connection.blocker_table_name AS table_name,
       sample_connection.blocker_row_identifier AS row_identifier
  FROM sampling_options
       INNER JOIN sample_connection
          ON sample_connection.sampling_id = sampling_options.sampling_id
       INNER JOIN sample_header
          ON sample_header.sampling_id = sampling_options.sampling_id
         AND sample_header.sample_set_number = sample_connection.sample_set_number
 WHERE sample_connection.BlockedOn <> 0
 ORDER BY sample_connection.sample_set_number DESC,
          sample_connection.connection_number ASC;
Busy SQL statements

WITH averages AS (
   SELECT sample_connection.sampling_id,
          sampling_options.selected_name AS target_database,
          sample_connection.LoginTime,
          sample_connection.connection_number,
          sample_connection.Name AS connection_name,
          sample_connection.Userid,
          MAX ( sample_connection.busy_percent ) AS busy_percent,
          sample_connection.LastStatement,
          sample_connection.LastPlanText
     FROM sampling_options
          INNER JOIN sample_connection
             ON sample_connection.sampling_id = sampling_options.sampling_id
    WHERE sample_connection.busy_percent >= 5
      AND sample_connection.LastStatement <> ''
      AND sample_connection.Name NOT LIKE 'Foxhound%'
    GROUP BY sample_connection.sampling_id,
             sampling_options.selected_name,
             sample_connection.LoginTime,
             sample_connection.connection_number,
             sample_connection.Name,
             sample_connection.Userid,
             sample_connection.LastStatement,
             sample_connection.LastPlanText )
SELECT TOP 10 *
  FROM averages
 ORDER BY averages.busy_percent DESC,
          averages.sampling_id,
          averages.LoginTime,
          averages.connection_number;

sampling_id, target_database, LoginTime, connection_number, connection_name, Userid, busy_percent, LastStatement, LastPlanText
1, 'xxx', 2016-02-18 21:00:17..., 26097, 'SQL_DBC_7f06790', 'xxx', 99, select ...,
1, 'xxx', 2016-02-18 18:27:28..., 23809, 'SQL_DBC_2dbee78', 'yyy', 96, select ..., ( Plan [ (B) ...
Checkpoints and checkpoint urgency

-- The Checkpoint Urgency percentage rises until a Checkpoint is taken.
-- The LATERAL clause retrieves the CheckpointUrgency value from the sample_detail row
-- that immediately precedes the row with the non-zero Chkpt value; i.e., it retrieves the
-- highest value CheckpointUrgency reached before it fell back to zero after the CHECKPOINT.

SELECT TOP 10
       sample_detail.sample_set_number AS "Sample Set Number",
       sample_detail.sample_recorded_at AS "Sample Recorded At",
       sample_detail.interval_Chkpt AS "Checkpoints",
       previous_sample_detail.CheckpointUrgency AS "Checkpoint Urgency %"
  FROM sample_detail,
       LATERAL ( SELECT TOP 1
                        previous_sample_detail.CheckpointUrgency
                   FROM sample_detail AS previous_sample_detail
                  WHERE previous_sample_detail.sample_set_number < sample_detail.sample_set_number
                  ORDER BY previous_sample_detail.sample_set_number DESC
               ) AS previous_sample_detail
 WHERE sample_detail.interval_Chkpt > 0
 ORDER BY sample_detail.sample_set_number DESC;

Sample Set Number     Sample Recorded At       Checkpoints           Checkpoint Urgency %
--------------------  -----------------------  --------------------  --------------------
             2507576  2018-12-07 15:55:07.114                     3                     8
             2507326  2018-12-07 15:50:02.953                     3                     8
             2507079  2018-12-07 15:45:08.339                     3                     7
             2506832  2018-12-07 15:40:16.319                     1                     8
             2506824  2018-12-07 15:40:05.616                     2                     8
             2506577  2018-12-07 15:35:06.111                     3                     8
             2506335  2018-12-07 15:30:18.935                     3                     8
             2506086  2018-12-07 15:25:03.571                     3                     8
             2505838  2018-12-07 15:20:12.942                     3                     8
             2505584  2018-12-07 15:15:06.125                     3                     8
Compute the "Totals:" line removed from the Monitor and History pages

Earlier versions of Foxhound displayed a "Totals:" line in the "Top Sample" sections of the Monitor, Sample History and Connection History pages. That line has been removed from Foxhound 5 to reduce clutter, but the values are available using the following adhoc query:
SELECT 'Totals:' AS "Totals:",
       rroad_f_number_with_commas ( sample_detail.Req ) AS "Throughput... Req",
       rroad_f_number_with_commas ( sample_detail."Commit" ) AS "Throughput... Commits",
       rroad_f_bytes_as_kmg ( sample_detail.BytesReceived + sample_detail.BytesSent ) AS "Throughput... Bytes",
       rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( CAST ( sample_detail.ProcessCPU * 1000.0 AS BIGINT ) ) AS "CPU",
       rroad_f_cpu_percentage_string ( sample_detail.ProcessCPU,
                                       DATEDIFF ( MILLISECOND, sample_detail.StartTime, sample_detail.sample_recorded_at ),
                                       sample_detail.CPU_count,
                                       'N' ) AS "CPU Time % av",
       rroad_f_number_with_commas ( sample_detail.CachePanics ) AS "Cache Panics",
       rroad_f_number_with_commas ( sample_detail.QueryLowMemoryStrategy ) AS "Cache Low Memory",
       rroad_f_number_with_commas (
          CAST ( ROUND ( CAST ( sample_detail.CacheHits AS DECIMAL ( 30, 6 ) )
                         / CAST ( GREATER ( 1, sample_detail.CacheRead ) AS DECIMAL ( 30, 6 ) )
                         * 100.0,
                         2 ) AS DECIMAL ( 30, 2 ) ) ) AS "Cache Satisfaction % av",
       rroad_f_number_with_commas ( sample_detail.DiskRead ) AS "Disk Reads",
       rroad_f_number_with_commas ( sample_detail.DiskWrite ) AS "Disk Writes",
       rroad_f_number_with_commas ( sample_detail.LogWrite ) AS "Log Writes",
       rroad_f_number_with_commas ( sample_detail.IndAdd ) AS "Index Adds",
       rroad_f_number_with_commas ( sample_detail.IndLookup ) AS "Index Lookups",
       rroad_f_number_with_commas (
          CAST ( ROUND ( CAST ( sample_detail.IndLookup
                                - LESSER ( sample_detail.IndLookup, sample_detail.FullCompare ) AS DECIMAL ( 30, 6 ) )
                         / CAST ( GREATER ( 1, sample_detail.IndLookup ) AS DECIMAL ( 30, 6 ) )
                         * 100.0,
                         0 ) AS DECIMAL ( 30, 0 ) ) ) AS "Index Satisfaction % av",
       rroad_f_number_with_commas ( sample_detail.FullCompare ) AS "Full Index Comps"
  FROM sample_detail
 WHERE sample_detail.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND sample_detail.sample_set_number = 2507317;

Totals:, Throughput... Req, Throughput... Commits, Throughput... Bytes, CPU, CPU Time % av, Cache Panics, Cache Low Memory, Cache Satisfaction % av, Disk Reads, Disk Writes, Log Writes, Index Adds, Index Lookups, Index Satisfaction % av, Full Index Comps
'Totals:', '40,298,585', '1,008,194', '4G', '1d 20h', '9.2%', '0', '0', '99.95', '1,048,977', '1,165,075', '4,273', '28,474,218', '18,584,630', '100', '0'
Connection-level temporary space usage

-- The WITH max_temp view calculates the maximum temporary space usage for each connection.
-- The outer SELECT gathers more data for the associated connection sample.

WITH max_temp AS (
   SELECT sample_connection.sampling_id,
          sample_connection.connection_id_string AS "Conn Id",
          MAX ( sample_connection.TempFilePages ) AS "MAX Temp Pages"
     FROM sample_connection
    GROUP BY sample_connection.sampling_id,
             sample_connection.connection_id_string )
SELECT TOP 15
       CAST ( DATEFORMAT ( MIN ( sample_detail.sample_recorded_at ), 'yyyy-mm-dd hh:nn:ss' ) AS VARCHAR ( 20 ) ) AS "High Tide At",
       CAST ( rroad_f_bytes_as_kmg ( max_temp."MAX Temp Pages" * sample_detail.page_size ) AS VARCHAR ( 5 ) ) AS "Temp",
       CAST ( sample_connection.connection_number AS DECIMAL ( 11 ) ) AS "Conn #",
       CAST ( LEFT ( STRING ( sample_connection.Userid, ' - ', sample_connection.Name ), 25 ) AS VARCHAR ( 25 ) ) AS "User - Conn Name",
       CAST ( LEFT ( LTRIM ( REPLACE ( REPLACE ( REPLACE (
                 COALESCE ( sample_connection.LastStatement, '' ),
                 '\X0A', ' ' ), '\X0D', ' ' ), '\X09', ' ' ) ), 30 ) AS VARCHAR ( 30 ) ) AS "LastStatement"
  FROM sample_detail
       INNER JOIN sample_connection
          ON sample_connection.sample_set_number = sample_detail.sample_set_number
       INNER JOIN max_temp
          ON max_temp.sampling_id = sample_detail.sampling_id
         AND max_temp."Conn Id" = sample_connection.connection_id_string
         AND max_temp."MAX Temp Pages" = sample_connection.TempFilePages
 WHERE sample_detail.sampling_id = 3   -- CHANGE THIS IF NECESSARY
 GROUP BY max_temp."MAX Temp Pages",
          sample_detail.page_size,
          sample_connection.connection_number,
          sample_connection.Userid,
          sample_connection.Name,
          COALESCE ( sample_connection.LastStatement, '' )
 ORDER BY "MAX Temp Pages" DESC,
          "High Tide At";

High Tide At         Temp   Conn #      User - Conn Name           LastStatement
-------------------  -----  ----------  -------------------------  ------------------------------
2018-12-06 22:12:55  156M      1140218  ACCTadm - SQL_DBC_28d3a8f
2018-12-04 22:16:30  155M      9154214  ACCTadm - SQL_DBC_28dbf79
2018-12-03 22:22:43  155M      8605914  ACCTadm - SQL_DBC_2878fb6
2018-12-05 09:39:16  64M    1000014664   - INT: Exchange
2018-12-05 09:39:16  64M    1000014666   - INT: Exchange
2018-12-05 09:39:16  64M    1000014662   - INT: Exchange
2018-12-05 09:39:16  64M    1000014661   - INT: Exchange
2018-12-05 09:39:16  64M    1000014667   - INT: Exchange
2018-12-05 09:39:16  64M    1000014663   - INT: Exchange
2018-12-05 09:39:16  64M    1000014668   - INT: Exchange
2018-12-05 09:39:16  64M    1000014665   - INT: Exchange
2018-12-04 12:27:53  37M       8847564  genAdm - SQL_DBC_1c4f23    select "genkey", "report_n
2018-12-07 00:07:53  33M       1147527  DBA - SQL_DBC_1694488      call "dba"."p_search_all_repor
2018-12-03 16:39:21  33M       8514551  genAdm - SQL_DBC_1a57d2    select "k"."type", "k"."se
2018-12-05 09:46:32  30M        121738  genAdm - SQL_DBC_e3b698    select "genkey", "report_n
Count active alerts for each target database

SELECT CAST ( STRING ( sampling_options.sampling_id,
                       ' - ',
                       sampling_options.selected_name,
                       IF sampling_options.selected_tab = '1'
                          THEN ' (DSN)'
                          ELSE '' END IF ) AS VARCHAR ( 50 ) ) AS "Target Database",
       COUNT(*) AS "Active Alerts"
  FROM sampling_options
       INNER JOIN alert
          ON alert.sampling_id = sampling_options.sampling_id
 WHERE alert.alert_is_clear_or_cancelled = 'N'
 GROUP BY sampling_options.sampling_id,
          sampling_options.selected_name,
          sampling_options.selected_tab
 ORDER BY sampling_options.sampling_id ASC;

Target Database                  Active Alerts
-------------------------------  -------------
6 - Inventory16-inspiron (DSN)               1
7 - ddd17 (DSN)                              1
Count current request status values

WITH status_counts AS (
   SELECT CAST ( COUNT(*) AS DECIMAL ( 30, 6 ) ) AS sample_count,
          sample_connection.current_req_status AS current_req_status
     FROM sample_connection
    GROUP BY sample_connection.current_req_status ),
total AS (
   SELECT SUM ( status_counts.sample_count ) AS sample_count
     FROM status_counts )
SELECT CAST ( status_counts.sample_count AS INTEGER ) AS "Sample Count",
       CAST ( ( status_counts.sample_count / total.sample_count ) * 100.0 AS DECIMAL ( 11, 2 ) ) AS "Percentage",
       status_counts.current_req_status AS "Current Req Status"
  FROM status_counts, total
 ORDER BY "Percentage" DESC;

Sample Count  Percentage  Current Req Status
------------  ----------  ---------------------------
     2578952       97.16  Idle
       60814        2.29  Executing
       12028        0.45  Blocked by lock
        1224        0.05  Waiting for thread
         883        0.03  -
         554        0.02  Waiting for shared resource
          35        0.00  Waiting for I/O
Count intra-query parallelism

-- Count connection samples with at least one child connection.

SELECT TOP 10
       DATEFORMAT ( sample_detail.sample_recorded_at, 'yyyy mm dd hh:nn:ss.ssssss' ) AS "Sample Recorded At",
       COUNT(*) AS "Child Connection Count"
  FROM sample_detail
       INNER JOIN sample_connection
          ON sample_connection.sample_set_number = sample_detail.sample_set_number
 WHERE sample_connection.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND COALESCE ( sample_connection.ParentConnection, 0 ) <> 0
 GROUP BY sample_detail.sample_recorded_at
HAVING "Child Connection Count" >= 1
 ORDER BY "Child Connection Count" DESC,
          "Sample Recorded At" ASC;

Sample Recorded At          Child Connection Count
--------------------------  ----------------------
2018 12 04 08:01:21.371000                      16
2018 12 04 08:01:31.372000                      16
2018 12 04 22:15:41.462000                      16
2018 12 03 11:53:35.872000                       8
2018 12 03 12:00:14.597000                       8
2018 12 03 12:05:33.289000                       8
2018 12 03 13:25:14.782000                       8
2018 12 03 13:44:16.799000                       8
2018 12 03 16:32:28.815000                       8
2018 12 03 17:10:56.788000                       8
Count rows remaining in Foxhound tables that are subject to the purge process

CHECKPOINT;   -- force the SYSTAB."COUNT" column to be up-to-date; this statement
              -- is permitted by Start - Foxhound5 - 11 Admin Update via ISQL

SELECT "COUNT" AS "rows", 'rroad_alert' AS "table" FROM SYSTAB WHERE table_name = 'rroad_alert' UNION
SELECT "COUNT" AS "rows", 'rroad_alert_cancelled' AS "table" FROM SYSTAB WHERE table_name = 'rroad_alert_cancelled' UNION
SELECT "COUNT" AS "rows", 'rroad_alerts_criteria' AS "table" FROM SYSTAB WHERE table_name = 'rroad_alerts_criteria' UNION
SELECT "COUNT" AS "rows", 'rroad_all_clear' AS "table" FROM SYSTAB WHERE table_name = 'rroad_all_clear' UNION
SELECT "COUNT" AS "rows", 'rroad_autodropped_connection' AS "table" FROM SYSTAB WHERE table_name = 'rroad_autodropped_connection' UNION
SELECT "COUNT" AS "rows", 'rroad_connection_string' AS "table" FROM SYSTAB WHERE table_name = 'rroad_connection_string' UNION
SELECT "COUNT" AS "rows", 'rroad_email_failure' AS "table" FROM SYSTAB WHERE table_name = 'rroad_email_failure' UNION
SELECT "COUNT" AS "rows", 'rroad_exception' AS "table" FROM SYSTAB WHERE table_name = 'rroad_exception' UNION
SELECT "COUNT" AS "rows", 'rroad_exception_dump' AS "table" FROM SYSTAB WHERE table_name = 'rroad_exception_dump' UNION
SELECT "COUNT" AS "rows", 'rroad_group_1_property_pivot' AS "table" FROM SYSTAB WHERE table_name = 'rroad_group_1_property_pivot' UNION
SELECT "COUNT" AS "rows", 'rroad_group_2_property_pivot' AS "table" FROM SYSTAB WHERE table_name = 'rroad_group_2_property_pivot' UNION
SELECT "COUNT" AS "rows", 'rroad_long_varchar' AS "table" FROM SYSTAB WHERE table_name = 'rroad_long_varchar' UNION
SELECT "COUNT" AS "rows", 'rroad_peaks' AS "table" FROM SYSTAB WHERE table_name = 'rroad_peaks' UNION
SELECT "COUNT" AS "rows", 'rroad_ping_log' AS "table" FROM SYSTAB WHERE table_name = 'rroad_ping_log' UNION
SELECT "COUNT" AS "rows", 'rroad_purge_run' AS "table" FROM SYSTAB WHERE table_name = 'rroad_purge_run' UNION
SELECT "COUNT" AS "rows", 'rroad_sampling_options' AS "table" FROM SYSTAB WHERE table_name = 'rroad_sampling_options' UNION
SELECT "COUNT" AS "rows", 'OFSS_subject_settings' AS "table" FROM SYSTAB WHERE table_name = 'OFSS_subject_settings' UNION
SELECT "COUNT" AS "rows", 'rroad_sample_set' AS "table" FROM SYSTAB WHERE table_name = 'rroad_sample_set' UNION
SELECT "COUNT" AS "rows", 'rroad_schedule' AS "table" FROM SYSTAB WHERE table_name = 'rroad_schedule' UNION
SELECT "COUNT" AS "rows", 'rroad_schedule_day_entry' AS "table" FROM SYSTAB WHERE table_name = 'rroad_schedule_day_entry' UNION
SELECT "COUNT" AS "rows", 'rroad_schedule_period_entry' AS "table" FROM SYSTAB WHERE table_name = 'rroad_schedule_period_entry' UNION
SELECT "COUNT" AS "rows", 'rroad_server_message' AS "table" FROM SYSTAB WHERE table_name = 'rroad_server_message'
ORDER BY 2;

rows                  table
--------------------  ----------------------------
                   0  OFSS_subject_settings
                  24  rroad_alert
                  21  rroad_alert_cancelled
                  13  rroad_alerts_criteria
                   3  rroad_all_clear
                   0  rroad_autodropped_connection
                  17  rroad_connection_string
                   0  rroad_email_failure
                  12  rroad_exception
                   0  rroad_exception_dump
               12063  rroad_group_1_property_pivot
               58886  rroad_group_2_property_pivot
                  41  rroad_long_varchar
                  11  rroad_peaks
                   0  rroad_ping_log
                   2  rroad_purge_run
               12063  rroad_sample_set
                  11  rroad_sampling_options
                  52  rroad_schedule
                 364  rroad_schedule_day_entry
               34944  rroad_schedule_period_entry
                  70  rroad_server_message
Count samples by CPU % ranges

SELECT CASE
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) = 0   THEN '(l) 0%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 10  THEN '(k) 1% to 9%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 20  THEN '(j) 10% to 19%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 30  THEN '(i) 20% to 29%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 40  THEN '(h) 30% to 39%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 50  THEN '(g) 40% to 49%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 60  THEN '(f) 50% to 59%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 70  THEN '(e) 60% to 69%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 80  THEN '(d) 70% to 79%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 90  THEN '(c) 80% to 89%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) < 100 THEN '(b) 90% to 99%'
          WHEN CAST ( sample_detail.interval_CPU_percent AS INTEGER ) = 100 THEN '(a) 100%'
       END CASE AS "Server CPU %",
       COUNT(*) AS "Sample Count",
       CAST ( ( "Sample Count" / total."Total Sample Count" ) * 100.0 AS DECIMAL ( 7, 2 ) ) AS "Sample %"
  FROM sample_detail,
       ( SELECT CAST ( COUNT(*) AS DECIMAL ( 30, 6 ) ) AS "Total Sample Count"
           FROM sample_detail
          WHERE sample_detail.sampling_id = 3   -- CHANGE THIS IF NECESSARY
            AND sample_detail.sample_lost = 'N'
            AND sample_detail.interval_CPU_percent IS NOT NULL ) AS total
 WHERE sample_detail.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND sample_detail.sample_lost = 'N'
   AND sample_detail.interval_CPU_percent IS NOT NULL
 GROUP BY "Server CPU %",
          total."Total Sample Count"
 ORDER BY "Server CPU %";

Server CPU %    Sample Count  Sample %
--------------  ------------  --------
(d) 70% to 79%             1      0.00
(e) 60% to 69%             1      0.00
(f) 50% to 59%             6      0.02
(g) 40% to 49%            58      0.21
(h) 30% to 39%           558      2.06
(i) 20% to 29%          4285     15.82
(j) 10% to 19%          7271     26.84
(k) 1% to 9%            7596     28.04
(l) 0%                  7311     26.99
Count simultaneous high CPU connections

WITH high_cpu_connection_count AS (
   SELECT DATEFORMAT ( sample_detail.sample_recorded_at, 'yyyy mm dd hh:nn:ss.ssssss' ) AS "Sample At",
          COUNT(*) AS "High CPU Conn Count",
          CAST ( sample_detail.interval_CPU_percent AS DECIMAL ( 5 ) ) AS "Server CPU %"
     FROM sample_detail
          INNER JOIN sample_connection
             ON sample_connection.sample_set_number = sample_detail.sample_set_number
    WHERE sample_connection.sampling_id = 3   -- CHANGE THIS IF NECESSARY
      AND sample_connection.interval_CPU_percent >= 3.0
      AND sample_connection.connection_number BETWEEN 1 AND 999999999
    GROUP BY sample_detail.sample_recorded_at,
             "Server CPU %"
   HAVING "High CPU Conn Count" >= 5 )
SELECT high_cpu_connection_count."High CPU Conn Count" AS "High CPU Conn Count",
       COUNT(*) AS "Sample Count",
       CAST ( AVG ( high_cpu_connection_count."Server CPU %" ) AS DECIMAL ( 5 ) ) AS "Avg Server CPU %"
  FROM high_cpu_connection_count
 GROUP BY high_cpu_connection_count."High CPU Conn Count"
 ORDER BY high_cpu_connection_count."High CPU Conn Count" DESC;

High CPU Conn Count  Sample Count  Avg Server CPU %
-------------------  ------------  ----------------
                  7             3                54
                  6            18                40
                  5            39                35
CPU seconds by user

-- The conn_cpu_sec view computes the CPU seconds for each connection.
-- The CASE expression tries to find a non-empty value to identify the user.
-- The INNER JOIN sampling_options ensures the sampling session has not been deleted.
-- The final SELECT computes the CPU seconds for each user and displays the TOP 10.

WITH conn_cpu_sec AS (
   SELECT sample_connection.connection_id_string AS "Conn Id",
          CASE
             WHEN TRIM ( COALESCE ( sample_connection.Userid, '' ) ) <> '' THEN sample_connection.Userid
             WHEN TRIM ( COALESCE ( sample_connection.Name, '' ) ) <> '' THEN sample_connection.Name
             WHEN TRIM ( COALESCE ( sample_connection.OSUser, '' ) ) <> '' THEN sample_connection.OSUser
             ELSE TRIM ( COALESCE ( sample_connection.NodeAddress, '' ) )
          END CASE AS "User",
          MAX ( sample_connection.ApproximateCPUTime ) AS "Conn CPU Sec"
     FROM sample_connection
          INNER JOIN sampling_options
             ON sampling_options.sampling_id = sample_connection.sampling_id
    WHERE sample_connection.sampling_id = 5   -- CHANGE THIS IF NECESSARY
    GROUP BY sample_connection.connection_id_string,
             sample_connection.Userid,
             sample_connection.Name,
             sample_connection.OSUser,
             sample_connection.NodeAddress )
SELECT TOP 10
       CAST ( LEFT ( conn_cpu_sec."User", 30 ) AS VARCHAR ( 30 ) ) AS "User",
       CAST ( SUM ( conn_cpu_sec."Conn CPU Sec" ) AS DECIMAL ( 11, 1 ) ) AS "Total CPU Sec",
       CAST ( COUNT(*) AS DECIMAL ( 11 ) ) AS "Conn Count",
       CAST ( "Total CPU Sec" / "Conn Count" AS DECIMAL ( 11 ) ) AS "Avg CPU Sec"
  FROM conn_cpu_sec
 GROUP BY conn_cpu_sec."User"
 ORDER BY "Total CPU Sec" DESC,
          "User" ASC;

User                            Total CPU Sec     Conn Count    Avg CPU Sec
------------------------------  -------------  -------------  -------------
DBA                                  339062.9           2423            140
A.Fernandez                          260418.5            545            478
K.Nguyen                             142775.2           8923             16
Dbremote                             141849.5           1661             85
B.Walker                              83559.2            166            503
F.Thomas                              49543.3            497            100
C.Lee                                 38166.2            758             50
A.Mcpherson                           32171.4           1494             22
F.Gonzalez                            29424.4            786             37
W.Harris                              28370.5            123            231
...
CPU usage by user id

-- Each connection is uniquely identified by the column sample_connection.connection_id_string.
-- The sample_connection.ApproximateCPUTime value increases with each successive row so the
-- MAX() value is the total CPU usage for that one connection.

WITH conn_cpu_sec AS (
   SELECT sample_connection.sampling_id,
          sample_connection.connection_id_string,
          sample_connection.Userid,
          MAX ( sample_connection.ApproximateCPUTime ) AS "Conn CPU Sec"
     FROM sample_connection
          INNER JOIN sampling_options
             ON sampling_options.sampling_id = sample_connection.sampling_id
    GROUP BY sample_connection.sampling_id,
             sample_connection.connection_id_string,
             sample_connection.Userid )
SELECT TOP 10
       CAST ( LEFT ( conn_cpu_sec.Userid, 30 ) AS VARCHAR ( 30 ) ) AS "User",
       CAST ( COUNT(*) AS DECIMAL ( 11 ) ) AS "Conn Count",
       CAST ( SUM ( conn_cpu_sec."Conn CPU Sec" ) AS DECIMAL ( 11, 1 ) ) AS "Total CPU Sec",
       CAST ( "Total CPU Sec" / "Conn Count" AS DECIMAL ( 11 ) ) AS "Avg CPU Sec"
  FROM conn_cpu_sec
 WHERE conn_cpu_sec.sampling_id = 3   -- CHANGE THIS IF NECESSARY
 GROUP BY conn_cpu_sec.Userid
 ORDER BY "Total CPU Sec" DESC,
          "User" ASC;

User                               Conn Count  Total CPU Sec    Avg CPU Sec
------------------------------  -------------  -------------  -------------
autoGen                                     3        34294.1          11431
DBA                                     51971        25320.5              0
a.Fernandez                               515        23877.9             46
j.Garcia                                   17        14784.2            870
RPTAcctg                                 1863        14339.4              8
m.Sanchez                                  12        14245.4           1187
e.Perez                                     7        12846.4           1835
a.Barbassa                                 22        10542.1            479
d.Rodriguez                                25        10155.5            406
f.Jones                                    11         9531.7            867
CPU usage for intra-query parallelism child connections

-- Intra-query parallelism child connections all have "INT: EXCHANGE" as their connection names.
-- Each connection is uniquely identified by the column sample_connection.connection_id_string
-- which looks like 3-1001749445-20181204221531-704 where the
--    "3" is the sampling_id, the
--    "1001749445" is the connection_number and the
--    "20181204221531-704" is the login_time 2018-12-04 22:15:31.704.

SELECT TOP 10
       DATEFORMAT ( MAX ( sample_detail.sample_recorded_at ), 'yyyy mm dd hh:nn:ss' ) AS "Recorded At",
       CAST ( LEFT ( sample_connection.connection_id_string, 32 ) AS VARCHAR ( 32 ) ) AS "INT: EXCHANGE Conn Id String",
       CAST ( MAX ( sample_connection.ApproximateCPUTime ) AS DECIMAL ( 11, 1 ) ) AS "Conn CPU Sec"
  FROM sample_connection
       INNER JOIN sampling_options
          ON sampling_options.sampling_id = sample_connection.sampling_id
       INNER JOIN sample_detail
          ON sample_detail.sample_set_number = sample_connection.sample_set_number
 WHERE sample_connection.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND sample_connection.Name = 'INT: EXCHANGE'
 GROUP BY sample_connection.connection_id_string,
          sample_connection.Name
 ORDER BY "Conn CPU Sec" DESC;

Recorded At          INT: EXCHANGE Conn Id String      Conn CPU Sec
-------------------  --------------------------------  -------------
2018 12 04 22:16:11  3-1001749445-20181204221531-704            55.1
2018 12 04 22:16:11  3-1001749442-20181204221531-704            55.0
2018 12 04 22:16:11  3-1001749444-20181204221531-704            54.4
2018 12 04 22:16:11  3-1001749439-20181204221531-704            53.9
2018 12 04 22:16:11  3-1001749446-20181204221531-704            53.5
2018 12 04 22:16:11  3-1001749440-20181204221531-704            52.1
2018 12 04 22:16:11  3-1001749441-20181204221531-704            51.5
2018 12 04 22:16:11  3-1001749443-20181204221531-704            50.6
2018 12 06 22:12:35  3-1000163894-20181206221150-822            42.9
2018 12 06 22:12:35  3-1000163888-20181206221150-822            42.9
CurrentCacheSize: Min/Max and History

-- CurrentCacheSize Min/Max

SELECT sampling_id,
       MIN ( CurrentCacheSize ) AS MIN_CurrentCacheSize_k,
       CAST ( rroad_f_bytes_as_kmg ( MIN_CurrentCacheSize_k * 1024 ) AS VARCHAR ( 20 ) ) AS MIN_CurrentCacheSize_kmg,
       MAX ( CurrentCacheSize ) AS MAX_CurrentCacheSize_k,
       CAST ( rroad_f_bytes_as_kmg ( MAX_CurrentCacheSize_k * 1024 ) AS VARCHAR ( 20 ) ) AS MAX_CurrentCacheSize_kmg
  FROM sample_detail
 GROUP BY sampling_id
 ORDER BY sampling_id;

sampling_id  MIN_CurrentCacheSize_k  MIN_CurrentCacheSize_kmg  MAX_CurrentCacheSize_k  MAX_CurrentCacheSize_kmg
-----------  ----------------------  ------------------------  ----------------------  ------------------------
          1                   52112  51M                                        52152  51M
          2                   52932  52M                                        53036  52M
          3                   53200  52M                                        53240  52M
          4                   53004  52M                                        53052  52M
          5                   53004  52M                                        53052  52M
          6                   53200  52M                                        53256  52M
          7                  337712  330M                                     3272848  3.1G
          8                13381808  13G                                     13381808  13G
          9                 3272832  3.1G                                     3272832  3.1G

-- CurrentCacheSize History

SELECT sampling_id,
       sample_set_number,
       sample_recorded_at,
       CurrentCacheSize AS CurrentCacheSize_k,
       CAST ( rroad_f_bytes_as_kmg ( CurrentCacheSize * 1024 ) AS VARCHAR ( 20 ) ) AS CurrentCacheSize_kmg
  FROM sample_detail
 WHERE sampling_id = 7
 ORDER BY sampling_id,
          sample_set_number;

sampling_id  sample_set_number  sample_recorded_at       CurrentCacheSize_k  CurrentCacheSize_kmg
-----------  -----------------  -----------------------  ------------------  --------------------
          7              12946  2019-10-07 07:55:57.059              337712  330M
          7              12953  2019-10-07 07:56:05.899              337712  330M
          7              12959  2019-10-07 07:56:15.807              337712  330M
          7              12966  2019-10-07 07:56:25.729              337712  330M
          7              12973  2019-10-07 07:56:35.685              337712  330M
          7              12980  2019-10-07 07:56:45.538              337712  330M
          7              12987  2019-10-07 07:56:55.469              337712  330M
          7              12994  2019-10-07 07:57:05.399              337712  330M
          7              13001  2019-10-07 07:57:15.305              337712  330M
          7              13008  2019-10-07 07:57:25.235              337712  330M
          7              13015  2019-10-07 07:57:35.134              337712  330M
          7              13022  2019-10-07 07:57:45.072              337712  330M
          7              13029  2019-10-07 07:57:54.982              337712  330M
          7              13036  2019-10-07 07:58:04.891              337712  330M
          7              13043  2019-10-07 07:58:14.84               337712  330M
          7              13050  2019-10-07 07:58:24.756              337712  330M
          7              13057  2019-10-07 07:58:34.68               337712  330M
          7              13064  2019-10-07 07:58:44.62               (NULL)  -
          7              13065  2019-10-07 07:58:44.62               (NULL)  -
          7              13073  2019-10-07 07:58:56.713             3272848  3.1G
Currently blocked connections for each target database

-- Here's how this query works:
--
-- - The WITH clause creates a temporary view that is used in the SELECT * FROM block at the bottom.
--
-- - The FROM sampling_options selects one row for each target database.
--
-- - The CROSS APPLY selects one sample_header row for each target database. That row is the most
--   recent successful sample for that target. If sampling is running then it will be a recent row.
--   If sampling is stopped then it might be an old row. Either way, it is the "most recent
--   successful sample" for each target database.
--
-- - The CROSS APPLY clause is used instead of INNER JOIN because the inner FROM sample_header
--   clause refers to a column in a different table in the outer FROM clause, something you
--   can't do with INNER JOIN.
--
-- - The two LEFT OUTER JOIN clauses gather up all the blocked (victim) sample_connection rows
--   plus the corresponding blocking (evil-doer) sample_connection rows.
--
-- - The LEFT OUTER JOIN clause is used instead of INNER JOIN so the view will return at least
--   one row for each target database even if it doesn't have any blocked connections.

WITH block AS (
   SELECT sampling_options.sampling_id AS sampling_id,
          latest_header.sample_set_number AS sample_set_number,
          latest_header.sample_finished_at AS recorded_at,
          IF sampling_options.selected_tab = 1
             THEN STRING ( 'DSN: ', sampling_options.selected_name )
             ELSE STRING ( 'String: ', sampling_options.selected_name ) END IF AS target_database,
          COALESCE ( blocked_connection.Userid, '' ) AS blocked_Userid,
          COALESCE ( blocked_by_connection.Userid, '' ) AS blocked_by_Userid,
          COALESCE ( blocked_connection.blocker_reason, '' ) AS reason,
          COALESCE ( blocked_connection.ReqStatus, '' ) AS ReqStatus,
          COALESCE ( blocked_connection.blocker_table_name, '' ) AS blocker_table_name
     FROM sampling_options   -- one row per target database
          CROSS APPLY ( SELECT TOP 1 *   -- most recent successful sample for each target
                          FROM sample_header
                         WHERE sample_header.sampling_id = sampling_options.sampling_id
                           AND sample_header.sample_lost = 'N'
                         ORDER BY sample_header.sample_set_number DESC
                      ) AS latest_header
          LEFT OUTER JOIN ( SELECT *   -- all the blocked connections in the latest sample
                              FROM sample_connection
                             WHERE sample_connection.BlockedOn <> 0
                          ) AS blocked_connection
             ON blocked_connection.sampling_id = sampling_options.sampling_id
            AND blocked_connection.sample_set_number = latest_header.sample_set_number
          LEFT OUTER JOIN sample_connection AS blocked_by_connection   -- the corresponding blocking connections
             ON blocked_by_connection.sampling_id = sampling_options.sampling_id
            AND blocked_by_connection.sample_set_number = blocked_connection.sample_set_number
            AND blocked_by_connection.connection_number = blocked_connection.BlockedOn )
SELECT *
  FROM block
 ORDER BY block.target_database,
          block.blocked_Userid;
Database growth

WITH first_number AS (
   SELECT CAST ( MIN ( sample_detail.sample_set_number ) AS DECIMAL ( 9 ) ) AS sample_set_number
     FROM sample_detail
    WHERE sample_detail.sampling_id = 1   -- CHANGE THIS IF NECESSARY
      AND sample_detail.sample_lost = 'N' ),
last_number AS (
   SELECT CAST ( MAX ( sample_detail.sample_set_number ) AS DECIMAL ( 9 ) ) AS sample_set_number
     FROM sample_detail
    WHERE sample_detail.sampling_id = 1   -- CHANGE THIS IF NECESSARY
      AND sample_detail.sample_lost = 'N' ),
first_sample AS (
   SELECT sample_detail.*
     FROM sample_detail
          INNER JOIN first_number
             ON first_number.sample_set_number = sample_detail.sample_set_number ),
last_sample AS (
   SELECT sample_detail.*
     FROM sample_detail
          INNER JOIN last_number
             ON last_number.sample_set_number = sample_detail.sample_set_number ),
both AS (
   SELECT CAST ( first_sample.sampling_id AS DECIMAL ( 3 ) ) AS "Id",
          first_sample.sample_recorded_at AS "First At",
          last_sample.sample_recorded_at AS "Last At",
          CAST ( first_sample.sample_set_number AS DECIMAL ( 9 ) ) AS "First Sample",
          CAST ( last_sample.sample_set_number AS DECIMAL ( 9 ) ) AS "Last Sample",
          CAST ( first_sample.FileSize * first_sample.page_size AS DECIMAL ( 15 ) ) AS "First Bytes",
          CAST ( last_sample.FileSize * last_sample.page_size AS DECIMAL ( 15 ) ) AS "Last Bytes",
          CAST ( first_sample.DBFileFragments AS DECIMAL ( 9 ) ) AS "First Frags",
          CAST ( last_sample.DBFileFragments AS DECIMAL ( 9 ) ) AS "Last Frags",
          DATEDIFF ( DAY, "First At", "Last At" ) AS "Days Diff",
          "Last Bytes" - "First Bytes" AS "Bytes Diff",
          "Last Frags" - "First Frags" AS "Frags Diff",
          CAST ( IF "Days Diff" <= 0
                    THEN 0
                    ELSE "Bytes Diff" / "Days Diff" END IF AS DECIMAL ( 15 ) ) AS "Bytes/day",
          CAST ( IF "Days Diff" <= 0
                    THEN 0
                    ELSE "Frags Diff" / "Days Diff" END IF AS DECIMAL ( 11 ) ) AS "Frags/day",
          CAST ( "Bytes/day" * 365.0 AS DECIMAL ( 15 ) ) AS "Bytes/year",
          CAST ( "Frags/day" * 365.0 AS DECIMAL ( 11 ) ) AS "Frags/year"
     FROM first_sample, last_sample )
SELECT "Id",
       CAST ( "Days Diff" AS DECIMAL ( 4 ) ) AS "Days",
       CAST ( rroad_f_bytes_as_kmg ( "First Bytes" ) AS VARCHAR ( 10 ) ) AS "First Bytes",
       CAST ( rroad_f_bytes_as_kmg ( "Last Bytes" ) AS VARCHAR ( 10 ) ) AS "Last Bytes",
       CAST ( rroad_f_bytes_as_kmg ( "Bytes Diff" ) AS VARCHAR ( 10 ) ) AS "Bytes +",
       CAST ( rroad_f_bytes_as_kmg ( "Bytes/day" ) AS VARCHAR ( 10 ) ) AS "Bytes/day",
       CAST ( rroad_f_bytes_as_kmg ( "Bytes/year" ) AS VARCHAR ( 10 ) ) AS "Bytes/year",
       CAST ( rroad_f_number_with_commas ( "First Frags" ) AS VARCHAR ( 10 ) ) AS "First Frags",
       CAST ( rroad_f_number_with_commas ( "Last Frags" ) AS VARCHAR ( 10 ) ) AS "Last Frags",
       CAST ( rroad_f_number_with_commas ( "Frags Diff" ) AS VARCHAR ( 10 ) ) AS "Frags +",
       CAST ( rroad_f_number_with_commas ( "Frags/day" ) AS VARCHAR ( 10 ) ) AS "Frags/day",
       CAST ( rroad_f_number_with_commas ( "Frags/year" ) AS VARCHAR ( 10 ) ) AS "Frags/year"
  FROM both;

Id     Days    First Bytes  Last Bytes  Bytes +     Bytes/day   Bytes/year  First Frags  Last Frags  Frags +     Frags/day   Frags/year
-----  ------  -----------  ----------  ----------  ----------  ----------  -----------  ----------  ----------  ----------  ----------
    3       4  64G          64G         21M         5.1M        1.8G        63           20          -43         -11         -4015
Display the Foxhound Administrator Authentication attributes

-- Display the authentication attributes that support the Foxhound Administrator Authentication feature.

SELECT *
  FROM authentication;

artificial_id, authentication_enabled, user_name, cookie_max_age_seconds, session_timeout_minutes, session_id, debug_messages
1, 'Y', 'admin', 43200, 720, 'foxhound5-sessionid-12632-4963740442229369', 'OFF'
External client connections per hour

-- An external client connection has a connection number in the range 1 to 999,999,999.
-- An extremely high external client connection rate may indicate unusual application behavior where many
-- short-lived connections are consuming resources but going unnoticed by the Foxhound sampling process.
-- The WITH maxconn view returns the highest-numbered client connection in each sample.
-- The WITH diffconn view returns the change in the highest client connection number for each sample.
-- The WITH connrate view returns the new connection rate for each sample.
-- Note: This query might work best for continuous unbroken ranges of successful Foxhound samples,
--       with no "Sampling stopped" or other gaps.

WITH maxconn AS (
   -- The BETWEEN 1 AND 999999999 predicate includes external client connections and excludes
   -- high-numbered internal, event and web service connections.
   -- The GROUP BY and MAX are used to determine the highest-numbered connection in each sample.
   SELECT sample_connection.sampling_id AS sampling_id,
          sample_connection.sample_set_number AS sample_set_number,
          MAX ( sample_connection.connection_number ) AS max_connection_number
     FROM sample_connection
    WHERE sample_connection.connection_number BETWEEN 1 AND 999999999
    GROUP BY sample_connection.sampling_id,
             sample_connection.sample_set_number ),
diffconn AS (
   -- The WINDOW moving_pair contains the previous and current samples.
   -- The diff_connection_number is the difference between the highest connection number in the
   -- previous sample and the highest connection number in the current sample.
   SELECT maxconn.sampling_id AS sampling_id,
          maxconn.sample_set_number AS sample_set_number,
          maxconn.max_connection_number AS max_connection_number,
          FIRST_VALUE ( maxconn.max_connection_number ) OVER moving_pair AS prev_max_connection_number,
          GREATER ( maxconn.max_connection_number - prev_max_connection_number, 0 ) AS diff_connection_number
     FROM maxconn
   WINDOW moving_pair AS ( ORDER BY maxconn.sample_set_number
                           ROWS BETWEEN 1 PRECEDING AND CURRENT ROW ) ),
connrate AS (
   -- The connections_per_hour is the rate at which the connection number increased in the
   -- interval between two consecutive samples.
   -- The rate may vary greatly because the interval is very short (only a few seconds)
   -- and the rate is calculated for a much longer period (an hour).
   -- Short high-rate bursts may not be significant, but sustained high rates probably are.
   SELECT diffconn.sampling_id,
          sample_detail.sample_set_number,
          diffconn.max_connection_number,
          diffconn.prev_max_connection_number,
          diffconn.diff_connection_number,
          sample_detail.interval_msec,
          CAST ( ( CAST ( diffconn.diff_connection_number AS DECIMAL ( 20, 6 ) )
                   / CAST ( sample_detail.interval_msec AS DECIMAL ( 20, 6 ) ) )
                 * ( 1000.0 * 60.0 * 60.0 ) AS INTEGER ) AS connections_per_hour
     FROM sample_detail
          INNER JOIN diffconn
             ON sample_detail.sample_set_number = diffconn.sample_set_number )
SELECT [ see examples below ];

-----------------------------------------------------------------------------------------
-- Example 1 : Top 10 external connection rates.
-- The sample_set_number BETWEEN predicate specifies a continuous range of successful
-- Foxhound samples, with no "Sampling stopped" or other gaps.
-- The interval_msec BETWEEN predicate excludes very short and long intervals because
-- they may be skewed outliers.
SELECT TOP 10
       connrate.sample_set_number,
       CAST ( STRING ( 'From ', prev_max_connection_number,
                       ' to ', max_connection_number,
                       ' = ', diff_connection_number ) AS VARCHAR ( 35 ) ) AS "Connection Number Difference",
       connrate.connections_per_hour                 AS "Connections per hour",
       CAST ( connrate.interval_msec AS INTEGER )    AS "Interval msec",
       sample_detail.sample_recorded_at
  FROM connrate
       INNER JOIN sample_detail ON sample_detail.sample_set_number = connrate.sample_set_number
 WHERE connrate.sampling_id = 3
   AND connrate.sample_set_number BETWEEN 2481232 AND 2507618
   AND connrate.interval_msec BETWEEN 6000 AND 14000
 ORDER BY connrate.connections_per_hour DESC;

sample_set_number    Connection Number Difference        Connections per hour Interval msec sample_recorded_at
-------------------- ----------------------------------- -------------------- ------------- -----------------------
             2506123 From 1508938 to 1509746 = 808                     482228          6032 2018-12-07 15:25:48.197
             2490720 From 1275806 to 1277106 = 1300                    478478          9781 2018-12-07 10:14:18.425
             2493696 From 1324830 to 1326024 = 1194                    445015          9659 2018-12-07 11:14:14.141
             2493768 From 1327079 to 1327909 = 830                     394976          7565 2018-12-07 11:15:38.808
             2493578 From 1323136 to 1323846 = 710                     392084          6519 2018-12-07 11:11:55.161
             2486362 From 1210397 to 1211057 = 660                     386907          6141 2018-12-07 08:46:17.425
             2502487 From 1458551 to 1459492 = 941                     353685          9578 2018-12-07 14:12:46.741
             2491655 From 1289817 to 1290696 = 879                     346214          9140 2018-12-07 10:33:25.098
             2491156 From 1282426 to 1283145 = 719                     330659          7828 2018-12-07 10:23:16.214
             2507252 From 1525404 to 1525954 = 550                     313688          6312 2018-12-07 15:48:35.967

-----------------------------------------------------------------------------------------
-- Example 2 : Average external connections per hour for a range of samples.
-- The sample_set_number BETWEEN predicate specifies a continuous range of successful
-- Foxhound samples, with no "Sampling stopped" or other gaps.
SELECT CAST ( AVG ( connections_per_hour ) AS INTEGER ) AS "Average Connections Per Hour"
  FROM connrate
 WHERE connrate.sampling_id = 3
   AND connrate.sample_set_number BETWEEN 2481232 AND 2507618;

Average Connections Per Hour
----------------------------
                       45116
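The same WITH views support other summaries; for example, this sketch (not one of the delivered examples; prepend the WITH maxconn, diffconn and connrate views shown above, and edit the sampling_id and range to suit) buckets the rate by hour to show when connection bursts cluster:

-- Average external connection rate for each hour of the day.
SELECT DATEFORMAT ( sample_detail.sample_recorded_at, 'yyyy-mm-dd hh' ) AS "Hour",
       CAST ( AVG ( connrate.connections_per_hour ) AS INTEGER )        AS "Avg Connections Per Hour"
  FROM connrate
       INNER JOIN sample_detail ON sample_detail.sample_set_number = connrate.sample_set_number
 WHERE connrate.sampling_id = 3                                   -- EDIT THIS
   AND connrate.sample_set_number BETWEEN 2481232 AND 2507618     -- EDIT THIS
 GROUP BY "Hour"
 ORDER BY "Hour";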
-- These column values may be copy-and-pasted into the Go to: field on the Sample History
-- page to go directly to the relevant Alert, All Clear and Cancelled messages:
--
--    alert.alert_occurrence
--    all_clear.all_clear_occurrence
--    alert_cancelled.alert_cancelled_occurrence
--
-- Note that the sample_set_number columns now contain copies of the values listed above; i.e.,
-- the sample_set_number values used to become out-of-date; now they are simply redundant :)
SELECT CAST ( rroad_f_limit_string_length_with_ellipsis ( STRING ( sampling_options.sampling_id, ' - ', sampling_options.selected_name,
                 IF sampling_options.selected_tab = '1' THEN ' (DSN)' ELSE '' END IF ),
              50, 'right' ) AS VARCHAR ( 50 ) ) AS "Target Database",
       CAST ( STRING ( alert.alert_number, ' ', alert_title.alert_title ) AS VARCHAR ( 24 ) ) AS "Alert",
       CAST ( alert.alert_occurrence AS INTEGER ) AS "Go to: Alert",
       CAST ( COALESCE ( all_clear.all_clear_occurrence, alert_cancelled.alert_cancelled_occurrence ) AS INTEGER ) AS "Go to: All Clear/Cancelled",
       CAST ( DATEDIFF ( SECOND, alert.alert_in_effect_at,
                         COALESCE ( all_clear.alert_all_clear_at, alert_cancelled.alert_all_clear_at ) ) AS DECIMAL ( 6 ) ) AS "Seconds"
  FROM sampling_options
       INNER JOIN alert ON alert.sampling_id = sampling_options.sampling_id
       LEFT OUTER JOIN all_clear ON all_clear.sampling_id = sampling_options.sampling_id
                                AND all_clear.alert_number = alert.alert_number
                                AND all_clear.alert_in_effect_at = alert.alert_in_effect_at
       LEFT OUTER JOIN alert_cancelled ON alert_cancelled.sampling_id = sampling_options.sampling_id
                                      AND alert_cancelled.alert_number = alert.alert_number
                                      AND alert_cancelled.alert_in_effect_at = alert.alert_in_effect_at
       INNER JOIN alert_title ON alert.alert_number = alert_title.alert_number
 WHERE alert.alert_number = 1
   AND alert.alert_is_clear_or_cancelled = 'Y'
   AND "Seconds" >= 60
 ORDER BY sampling_options.sampling_id,
          "Go to: All Clear/Cancelled" DESC;

Target Database                                    Alert                    Go to: Alert Go to: All Clear/Cancelled Seconds
-------------------------------------------------- ------------------------ ------------ -------------------------- --------
                                   7 - ddd17 (DSN) 1 Database unresponsive       1731019                    1744428   11202
                                   7 - ddd17 (DSN) 1 Database unresponsive       1717292                    1730989   11442
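To summarize instead of navigating to individual occurrences, a sketch like the following (same views as above; the alert number and the 'Y' filter are just the values used in the query above) counts Alert #1 occurrences and averages their durations per sampling session:

-- How often did Alert #1 fire, and how long did it stay in effect on average?
SELECT alert.sampling_id,
       COUNT(*) AS "Occurrences",
       CAST ( AVG ( DATEDIFF ( SECOND, alert.alert_in_effect_at,
                               COALESCE ( all_clear.alert_all_clear_at, alert_cancelled.alert_all_clear_at ) ) )
              AS DECIMAL ( 10, 1 ) ) AS "Avg Seconds In Effect"
  FROM alert
       LEFT OUTER JOIN all_clear ON all_clear.sampling_id = alert.sampling_id
                                AND all_clear.alert_number = alert.alert_number
                                AND all_clear.alert_in_effect_at = alert.alert_in_effect_at
       LEFT OUTER JOIN alert_cancelled ON alert_cancelled.sampling_id = alert.sampling_id
                                      AND alert_cancelled.alert_number = alert.alert_number
                                      AND alert_cancelled.alert_in_effect_at = alert.alert_in_effect_at
 WHERE alert.alert_number = 1                  -- EDIT THIS
   AND alert.alert_is_clear_or_cancelled = 'Y'
 GROUP BY alert.sampling_id
 ORDER BY alert.sampling_id;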
-- Copy a sample_set_number and paste it into the "Go to:" field on the
-- Sample History page to see what was going on when the connection became blocked.
SELECT sample_connection.sampling_id,
       MIN ( sample_connection.sample_set_number ) AS sample_set_number,
       sample_connection.connection_number
  FROM sample_connection
 WHERE COALESCE ( sample_connection.BlockedOn, 0 ) <> 0
 GROUP BY sample_connection.sampling_id,
          sample_connection.connection_id_string,
          sample_connection.connection_number,
          sample_connection.LastReqTime
 ORDER BY sample_set_number DESC,
          sample_connection.connection_number ASC;

sampling_id sample_set_number    connection_number
----------- -------------------- --------------------
          1                 4361                 2218
          1                 4319                21166
          1                 4057                21049
          1                 3984                16049
          1                 3941                 2232
          1                 3941                17936
          1                 3939                17936
          1                 3936                15683
          1                 3917                15683
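A related sketch (a variation on the query above, not a delivered example) counts blocked connections per sample, which helps locate the worst blocking episodes before drilling down to individual connections:

-- Which samples had the most blocked connections?
SELECT TOP 10
       sample_connection.sampling_id,
       sample_connection.sample_set_number,
       COUNT(*) AS blocked_connection_count
  FROM sample_connection
 WHERE COALESCE ( sample_connection.BlockedOn, 0 ) <> 0
 GROUP BY sample_connection.sampling_id,
          sample_connection.sample_set_number
 ORDER BY blocked_connection_count DESC;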
SELECT DATEFORMAT ( MIN ( sample_header.sample_finished_at ), 'Mmm Dd yyyy Hh:nn:ss AA' ) AS from_timestamp,
       DATEFORMAT ( MAX ( sample_header.sample_finished_at ), 'Mmm Dd yyyy Hh:nn:ss AA' ) AS to_timestamp
  FROM sample_header
       INNER JOIN sampling_options ON sampling_options.sampling_id = sample_header.sampling_id
 WHERE sample_header.sample_lost = 'N';

from_timestamp          to_timestamp
----------------------- -----------------------
Dec 3 2018 11:43:51 AM  Dec 23 2018 2:19:53 PM
SELECT sample_header.sampling_id AS sampling_id,
       IF sampling_options.selected_tab = 1 THEN 'DSN' ELSE 'String' END IF AS connection_type,
       CAST ( sampling_options.selected_name AS VARCHAR ( 50 ) ) AS target_database,
       DATEFORMAT ( MIN ( sample_header.sample_finished_at ), 'Mmm Dd yyyy Hh:nn:ss AA' ) AS from_timestamp,
       DATEFORMAT ( MAX ( sample_header.sample_finished_at ), 'Mmm Dd yyyy Hh:nn:ss AA' ) AS to_timestamp
  FROM sample_header
       INNER JOIN sampling_options ON sampling_options.sampling_id = sample_header.sampling_id
 WHERE sample_header.sample_lost = 'N'
 GROUP BY sampling_id, connection_type, target_database
 ORDER BY target_database, connection_type;

sampling_id connection_type target_database      from_timestamp          to_timestamp
----------- --------------- -------------------- ----------------------- -----------------------
          3 DSN             Accounting           Dec 3 2018 11:43:51 AM  Dec 7 2018 3:56:07 PM
          6 DSN             Inventory17          Dec 23 2018 1:06:21 PM  Dec 23 2018 2:21:22 PM
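The same joins can answer "how much history do I have?"; here's a minimal sketch (not a delivered example) that computes the days between the first and last good samples for each target:

-- How many days of history exist for each target database?
SELECT sample_header.sampling_id AS sampling_id,
       CAST ( sampling_options.selected_name AS VARCHAR ( 50 ) ) AS target_database,
       DATEDIFF ( DAY, MIN ( sample_header.sample_finished_at ),
                       MAX ( sample_header.sample_finished_at ) ) AS days_of_history
  FROM sample_header
       INNER JOIN sampling_options ON sampling_options.sampling_id = sample_header.sampling_id
 WHERE sample_header.sample_lost = 'N'
 GROUP BY sampling_id, target_database
 ORDER BY target_database;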
-- High connection-level CPU usage with HTML formatting
WITH high_conn AS (
   SELECT DATEFORMAT ( sample_detail.sample_recorded_at, 'yyyy-mm-dd hh:nn:ss' ) AS "Conn Sample At",
          CAST ( sample_detail.interval_CPU_percent AS DECIMAL ( 6, 1 ) ) AS "Server",           -- "Server CPU %"
          RIGHT ( STRING ( ' ', sample_connection.interval_CPU_percent ), 6 ) AS "CPU %",        -- "Conn CPU %"
          IF sample_connection.child_connection_count = 0
             THEN ' '
             ELSE RIGHT ( STRING ( ' ', sample_connection.child_connection_count ), 4 )
          END IF AS "Child",                                                                     -- "Child Conns"
          CAST ( sample_connection.connection_number AS DECIMAL ( 9 ) ) AS "Conn #",
          STRING ( sample_connection.Userid, ' - ', sample_connection.Name ) AS "User - Conn Name",
          COALESCE ( sample_connection.LastStatement, '' ) AS "LastStatement"
     FROM sample_detail
          INNER JOIN sample_connection ON sample_connection.sample_set_number = sample_detail.sample_set_number
          INNER JOIN sampling_options ON sampling_options.sampling_id = sample_connection.sampling_id
    WHERE sample_connection.interval_CPU_percent > 10.0
      AND sample_detail.interval_msec >= 5000 -- eliminate abnormally short sample intervals
      AND sample_connection.connection_number BETWEEN 1 AND 999999999 )
SELECT STRING (
   '<HTML><HEAD>',
   '<TITLE>High Connection-Level CPU Usage queried at ', DATEFORMAT ( CURRENT TIMESTAMP, 'yyyy-mm-dd hh:nn AA' ), '</TITLE>',
   '<STYLE>',
   'BODY { font-size: 100%; font-family: "LUCIDA CONSOLE", "COURIER NEW", "COURIER", "SANS-SERIF"; }',
   'TD { font-size: 1em; font-family: "LUCIDA CONSOLE", "COURIER NEW", "COURIER", "SANS-SERIF"; padding: 3px; vertical-align: top; white-space: nowrap; }',
   '</STYLE>',
   '</HEAD>',
   '<BODY>',
   '<H3>High Connection-Level CPU Usage queried at ', DATEFORMAT ( CURRENT TIMESTAMP, 'yyyy-mm-dd hh:nn AA' ), '</H3>',
   '<H4>ORDER BY "Conn CPU %" DESC, "Conn #" ASC, "Conn Sample At"</H4>',
   '<TABLE>',
   '<TR>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Conn Sample At</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Conn<BR>CPU %</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Child<BR>Conns</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"> <B>Conn #</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"> <B>User - Conn Name</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Last Statement</B></TD>',
   '</TR>',
   LIST ( STRING (
      '<TR>',
      '<TD STYLE="padding-left: 1em; ">', high_conn."Conn Sample At", '</TD>',
      '<TD STYLE="padding-left: 1em; ">', high_conn."CPU %", '</TD>',
      '<TD STYLE="padding-left: 1em; ">', high_conn."Child", '</TD>',
      '<TD STYLE="padding-left: 1em; ">', high_conn."Conn #", '</TD>',
      '<TD>', '<DIV STYLE="max-width: 500px; padding-left: 1em; white-space: normal; word-wrap: break-word;">', high_conn."User - Conn Name", '</DIV>', '</TD>',
      '<TD>', '<DIV STYLE="max-width: 900px; padding-left: 1em; white-space: normal; word-wrap: break-word;">', high_conn.LastStatement, '</DIV>', '</TD>',
      '</TR>' ),
      '\X0D\X0A'
      ORDER BY "CPU %" DESC, "Conn #" ASC, "Conn Sample At" ASC ),
   '</TABLE></BODY></HTML>' )
  FROM high_conn;
OUTPUT TO 'C:\\TEMP\\High_connection_level_CPU_usage.html' FORMAT TEXT DELIMITED BY '' HEXADECIMAL ASIS QUOTE '';

When High_connection_level_CPU_usage.html is rendered in a browser, the output looks like this:

High Connection-Level CPU Usage queried at 2019-05-16 09:10 AM
ORDER BY "Conn CPU %" DESC, "Conn #" ASC, "Conn Sample At"

Conn Sample At       Conn CPU %  Child Conns  Conn #  User - Conn Name  Last Statement
2016-02-18 17:11:41        25.5                 1764  yyy - aaa         select ...
2016-02-18 17:42:02        25.1                 4068  yyy - bbb         select ...
-- A high-CPU query is a long-running query that has been using a lot of CPU time. WITH -- 1. The long_running_query view identifies queries that have been executing for two or more samples. -- The connection_id_string matches connection samples for the same connection. -- The LastReqTime matches connection samples for the same request. -- The WHERE ReqStatus <> 'Idle' predicate includes 'Executing' and all 'Waiting...' status values. -- The HAVING clause eliminates short-running requests. long_running_query AS ( SELECT sample_connection.connection_id_string AS connection_id_string, sample_connection.LastReqTime AS LastReqTime, MIN ( sample_connection.sample_set_number ) AS from_sample_set_number, MAX ( sample_connection.sample_set_number ) AS to_sample_set_number, MAX ( sample_connection.interval_CPU_percent ) AS max_cpu, AVG ( sample_connection.interval_CPU_percent ) AS avg_cpu, AVG ( sample_connection.busy_percent ) AS avg_busy -- MAX busy omitted, will always be 99% FROM sample_connection WHERE sample_connection.ReqStatus <> 'Idle' AND sample_connection.time_since_last_request > 0 AND TRIM ( COALESCE ( sample_connection.LastStatement, '' ) ) <> '' GROUP BY sample_connection.connection_id_string, sample_connection.LastReqTime, sample_connection.LastStatement HAVING from_sample_set_number <> to_sample_set_number ), -- 2. The highest_cpu_query view formats the columns for display and selects the highest CPU queries. -- The SELECT builds a formatted result set for the queries identified by the WITH long_running_query view. -- Feel free to trim and/or expand the SELECT list, and/or add WHERE and ORDER BY clauses. -- The FROM long_running_query provides the connection samples of interest. -- The INNER JOIN sample_connection provides access to additional connection sample columns. -- The INNER JOIN sample_header provides access to the sample_finished_at column -- The INNER JOIN sampling_options prevents retrieval of connection samples from "zombie" (deleted) target databases. 
highest_cpu_query AS (
   SELECT TOP 100
          sampling_options.sampling_id AS "ID",
          DATEFORMAT ( long_running_query.LastReqTime, 'yyyy-mm-dd hh:nn:ss' ) AS "From",
          DATEFORMAT ( sample_header.sample_finished_at, 'yyyy-mm-dd hh:nn:ss' ) AS "To",
          long_running_query.from_sample_set_number AS "From Sample",
          long_running_query.to_sample_set_number AS "To Sample",
          CAST ( sample_connection.time_since_last_request / 1000 AS INTEGER ) AS "Seconds",
          CAST ( long_running_query.max_cpu AS DECIMAL ( 9, 1 ) ) AS "Max CPU %",
          CAST ( long_running_query.avg_cpu AS DECIMAL ( 9, 1 ) ) AS "Avg CPU %",
          CAST ( long_running_query.avg_busy AS DECIMAL ( 9, 1 ) ) AS "Avg Busy %",
          sample_connection.child_connection_count AS "Child Conns",
          STRING ( sample_connection.connection_number, ' / ',
                   IF TRIM ( COALESCE ( sample_connection.Userid, '' ) ) = '' THEN '-' ELSE sample_connection.Userid ENDIF, ' / ',
                   IF TRIM ( COALESCE ( sample_connection.OSUser, '' ) ) = '' THEN '-' ELSE sample_connection.OSUser ENDIF, ' / ',
                   IF TRIM ( COALESCE ( sample_connection.NodeAddress, '' ) ) IN ( '', 'NA' ) THEN '-' ELSE sample_connection.NodeAddress ENDIF, ' / ',
                   IF COALESCE ( sample_connection.Name, '' ) = '' THEN '-' ELSE sample_connection.Name ENDIF ) AS "Conn #, User, OS User, IP, Name",
          sample_connection.LastStatement AS LastStatement
     FROM long_running_query
          INNER JOIN sample_connection ON sample_connection.connection_id_string = long_running_query.connection_id_string
                                      AND sample_connection.sample_set_number = long_running_query.to_sample_set_number
          INNER JOIN sample_header ON sample_header.sample_set_number = sample_connection.sample_set_number
          INNER JOIN sampling_options ON sampling_options.sampling_id = sample_connection.sampling_id
    ORDER BY "Avg CPU %" DESC, "To Sample" ASC )
-- 3. The final SELECT prepares the HTML for output.
SELECT STRING ( '<HTML><HEAD>', '<TITLE>Overview of High-CPU Queries as at ', DATEFORMAT ( CURRENT TIMESTAMP, 'yyyy-mm-dd hh:nn AA' ), '</TITLE>', '<STYLE>', 'BODY { font-size: 100%; font-family: "LUCIDA CONSOLE", "COURIER NEW", "COURIER", "SANS-SERIF"; }', 'TD { font-size: 1em; font-family: "LUCIDA CONSOLE", "COURIER NEW", "COURIER", "SANS-SERIF"; padding: 3px; vertical-align: top; white-space: nowrap; }', '</STYLE>', '</HEAD>', '<BODY>', '<H3>Overview of High-CPU Queries as at ', DATEFORMAT ( CURRENT TIMESTAMP, 'yyyy-mm-dd hh:nn AA' ), '</H3>', '<TABLE>', '<TR>', '<TD STYLE="text-align: center; vertical-align: bottom;"><B>ID</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"><B>From</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"><B>To</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"> <B>From<BR> Sample</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"> <B>To<BR> Sample</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"> <B>Seconds</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"><B>Max<BR>CPU<BR>%</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"><B>Avg<BR>CPU<BR>%</B></TD>', '<TD STYLE="text-align: center; vertical-align: bottom;"><B>Avg<BR>Busy<BR>%</B></TD>', '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Conn #, User, OS User, IP, Name</B></TD>', '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Last Statement</B></TD>', '</TR>', LIST ( STRING ( '<TR>', '<TD STYLE="text-align: center;">', highest_cpu_query."ID", '</TD>', '<TD STYLE="text-align: center;">', highest_cpu_query."From", '</TD>', '<TD STYLE="text-align: center;"> ', highest_cpu_query."To", '</TD>', '<TD STYLE="text-align: center;"> ', highest_cpu_query."From Sample", '</TD>', '<TD STYLE="text-align: center;"> ', highest_cpu_query."To Sample", '</TD>', '<TD STYLE="text-align: center;"> ', highest_cpu_query."Seconds", '</TD>', '<TD STYLE="text-align: center;">', highest_cpu_query."Max CPU %", '</TD>', '<TD STYLE="text-align: center;">', highest_cpu_query."Avg CPU %", '</TD>', '<TD STYLE="text-align: center;">', highest_cpu_query."Avg Busy %", '</TD>', '<TD>', '<DIV STYLE="max-width: 600px; padding-left: 1em; white-space: normal; word-wrap: break-word;">', highest_cpu_query."Conn #, User, OS User, IP, Name", '</DIV>', '</TD>', '<TD>', '<DIV STYLE="max-width: 800px; padding-left: 1em; white-space: normal; word-wrap: break-word;">', highest_cpu_query.LastStatement, '</DIV>', '</TD>', '</TR>' ), '\X0D\X0A' ORDER BY highest_cpu_query."Avg CPU %" DESC, highest_cpu_query."To Sample" ASC ), '</TABLE></BODY></HTML>' ) FROM highest_cpu_query; OUTPUT TO 'C:\\TEMP\\high_cpu_queries.html' FORMAT TEXT DELIMITED BY '' HEXADECIMAL ASIS QUOTE '';
-- List all the administrator users defined for the Foxhound Administrator Authentication feature. SELECT * FROM administrator ORDER BY user_name;
SELECT * FROM patch ORDER BY patch.file_name;
-- A long-running query is a SQL statement that has been executing for a long time on the same connection. -- The WITH long_running_query view identifies queries that have been executing for two or more samples. -- The connection_id_string matches connection samples for the same connection. -- The LastReqTime matches connection samples for the same request. -- The WHERE clause eliminates unqualified or uninteresting connection samples. -- The HAVING clause eliminates short-running requests. WITH long_running_query AS ( SELECT sample_connection.connection_id_string AS connection_id_string, sample_connection.LastReqTime AS LastReqTime, MIN ( sample_connection.sample_set_number ) AS from_sample_set_number, MAX ( sample_connection.sample_set_number ) AS to_sample_set_number FROM sample_connection WHERE sample_connection.ReqStatus = 'Executing' AND sample_connection.time_since_last_request > 0 AND TRIM ( COALESCE ( sample_connection.LastStatement, '' ) ) <> '' GROUP BY sample_connection.sampling_id, sample_connection.connection_id_string, sample_connection.LastReqTime HAVING from_sample_set_number <> to_sample_set_number ) -- The SELECT builds a formatted result set for the queries identified by the WITH long_running_query view. -- Feel free to trim and/or expand the SELECT list, and/or add WHERE and ORDER BY clauses. SELECT TOP 10 sampling_options.sampling_id, STRING ( sample_connection.connection_number, ' / ', IF TRIM ( COALESCE ( sample_connection.Userid, '' ) ) = '' THEN '-' ELSE sample_connection.Userid ENDIF, ' / ', IF TRIM ( COALESCE ( sample_connection.OSUser, '' ) ) = '' THEN '-' ELSE sample_connection.OSUser ENDIF, ' / ', IF TRIM ( COALESCE ( sample_connection.NodeAddress, '' ) ) IN ( '', 'NA' ) THEN '-' ELSE sample_connection.NodeAddress ENDIF, ' / ', IF COALESCE ( sample_connection.Name, '' ) = '' THEN '-' ELSE sample_connection.Name ENDIF ) AS "Conn #, User, OS User, IP, Name", long_running_query.to_sample_set_number, sample_header.sample_finished_at, CAST ( sample_connection.time_since_last_request / 1000 AS BIGINT ) AS elapsed_seconds, sample_connection.LastStatement AS LastStatement, sample_connection.LastPlanText AS LastPlanText -- The FROM long_running_query provides the connection samples of interest. -- The INNER JOIN sample_connection provides access to additional connection sample columns. -- The INNER JOIN sample_header provides access to the sample_finished_at column -- The INNER JOIN sampling_options prevents retrieval of connection samples from "zombie" (deleted) target databases. FROM long_running_query INNER JOIN sample_connection ON sample_connection.connection_id_string = long_running_query.connection_id_string AND sample_connection.sample_set_number = long_running_query.to_sample_set_number INNER JOIN sample_header ON sample_header.sample_set_number = sample_connection.sample_set_number INNER JOIN sampling_options ON sampling_options.sampling_id = sample_connection.sampling_id ORDER BY elapsed_seconds DESC, long_running_query.to_sample_set_number ASC;
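As the comments suggest, WHERE clauses can narrow the result; the following sketch (the sampling_id and the five-minute threshold are arbitrary examples, not delivered values) restricts the long_running_query view itself to one target database and to queries that ran for at least five minutes:

-- Variation: only target database 1, only queries executing for 5 minutes or longer.
WITH long_running_query AS (
   SELECT sample_connection.connection_id_string        AS connection_id_string,
          sample_connection.LastReqTime                 AS LastReqTime,
          MIN ( sample_connection.sample_set_number )   AS from_sample_set_number,
          MAX ( sample_connection.sample_set_number )   AS to_sample_set_number
     FROM sample_connection
    WHERE sample_connection.ReqStatus = 'Executing'
      AND sample_connection.sampling_id = 1                      -- EDIT THIS
      AND sample_connection.time_since_last_request >= 300000    -- 5 minutes in msec; EDIT THIS
      AND TRIM ( COALESCE ( sample_connection.LastStatement, '' ) ) <> ''
    GROUP BY sample_connection.sampling_id,
             sample_connection.connection_id_string,
             sample_connection.LastReqTime
   HAVING from_sample_set_number <> to_sample_set_number )
SELECT long_running_query.connection_id_string,
       long_running_query.LastReqTime,
       long_running_query.from_sample_set_number,
       long_running_query.to_sample_set_number
  FROM long_running_query
 ORDER BY long_running_query.LastReqTime;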
-- A long-running query is a SQL statement that has been executing for a long time on the same connection. WITH -- 1. The long_running_query view identifies queries that have been executing for two or more samples. -- The connection_id_string matches connection samples for the same connection. -- The LastReqTime matches connection samples for the same request. -- The WHERE ReqStatus <> 'Idle' predicate includes 'Executing' and all 'Waiting...' status values. -- The HAVING clause eliminates short-running requests. long_running_query AS ( SELECT sample_connection.connection_id_string AS connection_id_string, sample_connection.LastReqTime AS LastReqTime, MIN ( sample_connection.sample_set_number ) AS from_sample_set_number, MAX ( sample_connection.sample_set_number ) AS to_sample_set_number, MAX ( sample_connection.interval_CPU_percent ) AS max_cpu, AVG ( sample_connection.interval_CPU_percent ) AS avg_cpu FROM sample_connection WHERE sample_connection.ReqStatus <> 'Idle' AND sample_connection.time_since_last_request > 0 AND TRIM ( COALESCE ( sample_connection.LastStatement, '' ) ) <> '' GROUP BY sample_connection.sampling_id, sample_connection.connection_id_string, sample_connection.LastReqTime HAVING from_sample_set_number <> to_sample_set_number ), -- 2. The longest_running_query view formats the columns for display and selects the longest running queries. -- The SELECT builds a formatted result set for the queries identified by the WITH long_running_query view. -- Feel free to trim and/or expand the SELECT list, and/or add WHERE and ORDER BY clauses. -- The FROM long_running_query provides the connection samples of interest. -- The INNER JOIN sample_connection provides access to additional connection sample columns. -- The INNER JOIN sample_header provides access to the sample_finished_at column -- The INNER JOIN sampling_options prevents retrieval of connection samples from "zombie" (deleted) target databases. longest_running_query AS ( SELECT TOP 100 sampling_options.sampling_id AS "ID", DATEFORMAT ( long_running_query.LastReqTime, 'yyyy-mm-dd hh:nn:ss' ) AS "From", DATEFORMAT ( sample_header.sample_finished_at, 'yyyy-mm-dd hh:nn:ss' ) AS "To", long_running_query.to_sample_set_number AS "To Sample", CAST ( sample_connection.time_since_last_request / 1000 AS INTEGER ) AS "Seconds", CAST ( long_running_query.max_cpu AS DECIMAL ( 9, 1 ) ) AS "Max CPU %", CAST ( long_running_query.avg_cpu AS DECIMAL ( 9, 1 ) ) AS "Avg CPU %", STRING ( sample_connection.connection_number, ' / ', IF TRIM ( COALESCE ( sample_connection.Userid, '' ) ) = '' THEN '-' ELSE sample_connection.Userid ENDIF, ' / ', IF TRIM ( COALESCE ( sample_connection.OSUser, '' ) ) = '' THEN '-' ELSE sample_connection.OSUser ENDIF, ' / ', IF TRIM ( COALESCE ( sample_connection.NodeAddress, '' ) ) IN ( '', 'NA' ) THEN '-' ELSE sample_connection.NodeAddress ENDIF, ' / ', IF COALESCE ( sample_connection.Name, '' ) = '' THEN '-' ELSE sample_connection.Name ENDIF ) AS "Conn #, User, OS User, IP, Name", sample_connection.LastStatement AS LastStatement FROM long_running_query INNER JOIN sample_connection ON sample_connection.connection_id_string = long_running_query.connection_id_string AND sample_connection.sample_set_number = long_running_query.to_sample_set_number INNER JOIN sample_header ON sample_header.sample_set_number = sample_connection.sample_set_number INNER JOIN sampling_options ON sampling_options.sampling_id = sample_connection.sampling_id ORDER BY "Seconds" DESC, "To Sample" ASC ) -- 3. 
The final SELECT prepares the HTML for output.
SELECT STRING (
   '<HTML><HEAD>',
   '<TITLE>Overview of Long-Running Queries as at ', DATEFORMAT ( CURRENT TIMESTAMP, 'yyyy-mm-dd hh:nn AA' ), '</TITLE>',
   '<STYLE>',
   'BODY { font-size: 100%; font-family: "LUCIDA CONSOLE", "COURIER NEW", "COURIER", "SANS-SERIF"; }',
   'TD { font-size: 1em; font-family: "LUCIDA CONSOLE", "COURIER NEW", "COURIER", "SANS-SERIF"; ',
   'padding: 3px; vertical-align: top; white-space: nowrap; }',
   '</STYLE>',
   '</HEAD>',
   '<BODY>',
   '<H3>Overview of Long-Running Queries as at ', DATEFORMAT ( CURRENT TIMESTAMP, 'yyyy-mm-dd hh:nn AA' ), '</H3>',
   '<TABLE>',
   '<TR>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"><B>ID</B></TD>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"><B>From</B></TD>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"><B>To</B></TD>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"> <B>To<BR> Sample</B></TD>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"> <B>Seconds</B></TD>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"><B>Max<BR>CPU<BR>%</B></TD>',
   '<TD STYLE="text-align: center; vertical-align: bottom;"><B>Avg<BR>CPU<BR>%</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Conn #, User, OS User, IP, Name</B></TD>',
   '<TD STYLE="padding-left: 1em; vertical-align: bottom;"><B>Last Statement</B></TD>',
   '</TR>',
   LIST ( STRING (
      '<TR>',
      '<TD STYLE="text-align: center;">', longest_running_query."ID", '</TD>',
      '<TD STYLE="text-align: center;">', longest_running_query."From", '</TD>',
      '<TD STYLE="text-align: center;"> ', longest_running_query."To", '</TD>',
      '<TD STYLE="text-align: center;"> ', longest_running_query."To Sample", '</TD>',
      '<TD STYLE="text-align: center;"> ', longest_running_query."Seconds", '</TD>',
      '<TD STYLE="text-align: center;">', longest_running_query."Max CPU %", '</TD>',
      '<TD STYLE="text-align: center;">', longest_running_query."Avg CPU %", '</TD>',
      '<TD>', '<DIV STYLE="max-width: 600px; padding-left: 1em; white-space: normal; word-wrap: break-word;">', longest_running_query."Conn #, User, OS User, IP, Name", '</DIV>', '</TD>',
      '<TD>', '<DIV STYLE="max-width: 800px; padding-left: 1em; white-space: normal; word-wrap: break-word;">', longest_running_query.LastStatement, '</DIV>', '</TD>',
      '</TR>' ),
      '\X0D\X0A'
      ORDER BY longest_running_query."Seconds" DESC, longest_running_query."To Sample" ASC ),
   '</TABLE></BODY></HTML>' )
  FROM longest_running_query;
OUTPUT TO 'C:\\TEMP\\long_running_queries.html' FORMAT TEXT DELIMITED BY '' HEXADECIMAL ASIS QUOTE '';

When long_running_queries.html is rendered in a browser, the output looks like this:

Overview of Long-Running Queries as at 2019-05-16 08:26 AM

ID  From                 To                   To Sample  Seconds  Max CPU %  Avg CPU %  Conn #, User, OS User, IP, Name         Last Statement
 1  2016-02-18 17:11:17  2016-02-18 17:13:31       4179      134        7.7        3.5  1764 / xxx / - / 192.168.39.2 / yyy     select ...
 1  2016-02-18 21:00:17  2016-02-18 21:02:27       5554      130        2.5        2.5  26097 / xxx / - / - / SQL_DBC_7f06790   select ...
SELECT sample_header.sampling_id AS sampling_id,
       IF sampling_options.selected_tab = 1 THEN 'DSN' ELSE 'String' END IF AS connection_type,
       CAST ( sampling_options.selected_name AS VARCHAR ( 50 ) ) AS target_database,
       COUNT(*) AS sample_count
  FROM sample_header
       INNER JOIN sampling_options ON sampling_options.sampling_id = sample_header.sampling_id
 GROUP BY sampling_id, connection_type, target_database
 ORDER BY target_database, connection_type;

sampling_id connection_type target_database      sample_count
----------- --------------- -------------------- ------------
          3 DSN             Accounting                  27132
          6 DSN             Inventory17                   458
-- Average rates over several successive samples are sometimes just as interesting as peak rates
-- which may only be sustained for one or two samples.
-- The following examples show how to calculate the average rates over 10 samples of three different
-- sample_detail columns: interval_DiskRead, interval_DiskWrite and interval_IndAdd, with the top 10
-- values shown in each case.
-- The WINDOW ten_rows allows the calculation of a rolling AVG() over ten samples. You may want to
-- experiment with different window sizes; e.g., 5 samples: ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
-- (a sketch using a 5-sample window follows these examples).
-- Note that the sample_detail view does not contain rate columns (no "rate_DiskRead") so calculations
-- like "interval_DiskRead * 1000.0 / interval_msec" must be performed in your adhoc query.
-- Note also that you can copy the "Recorded At" values into the Go to: field on the History page.
SELECT TOP 10
       LAST_VALUE ( sample_detail.sample_recorded_at ) OVER ten_rows AS "Recorded At",
       CAST ( AVG ( sample_detail.interval_DiskRead * 1000.0 / sample_detail.interval_msec ) OVER ten_rows AS BIGINT ) AS "Avg Disk Reads/s"
  FROM sample_detail
 WHERE COALESCE ( sample_detail.interval_msec, 0 ) > 0
WINDOW ten_rows AS ( ORDER BY sample_detail.sample_recorded_at
                         ROWS BETWEEN 9 PRECEDING AND CURRENT ROW )
 ORDER BY "Avg Disk Reads/s" DESC;

Recorded At             Avg Disk Reads/s
----------------------- --------------------
2018-12-05 04:18:39.333                 3187
2018-12-05 04:18:29.442                 3103
2018-12-05 04:18:49.240                 2826
2018-12-05 04:18:19.520                 2776
2018-12-05 04:18:09.583                 2512
2018-12-05 04:18:59.130                 2459
2018-12-05 04:17:59.676                 2227
2018-12-05 04:19:09.068                 2084
2018-12-05 04:17:49.739                 1852
2018-12-05 04:19:18.959                 1716

SELECT TOP 10
       LAST_VALUE ( sample_detail.sample_recorded_at ) OVER ten_rows AS "Recorded At",
       CAST ( AVG ( sample_detail.interval_DiskWrite * 1000.0 / sample_detail.interval_msec ) OVER ten_rows AS BIGINT ) AS "Avg Disk Writes/s"
  FROM sample_detail
 WHERE COALESCE ( sample_detail.interval_msec, 0 ) > 0
WINDOW ten_rows AS ( ORDER BY sample_detail.sample_recorded_at
                         ROWS BETWEEN 9 PRECEDING AND CURRENT ROW )
 ORDER BY "Avg Disk Writes/s" DESC;

Recorded At             Avg Disk Writes/s
----------------------- --------------------
2018-12-05 09:41:33.520                  104
2018-12-05 09:40:53.800                  103
2018-12-05 09:40:43.878                  103
2018-12-05 09:41:23.738                  103
2018-12-05 09:40:34.191                  103
2018-12-05 09:41:03.722                  103
2018-12-05 09:41:15.254                  102
2018-12-05 09:40:14.722                  102
2018-12-05 09:40:24.003                  102
2018-12-05 09:40:04.378                  102

-- Note that large IndAdd values may be the result of UPDATE statements that modify index columns.
SELECT TOP 10
       LAST_VALUE ( sample_detail.sample_recorded_at ) OVER ten_rows AS "Recorded At",
       CAST ( AVG ( sample_detail.interval_IndAdd * 1000.0 / sample_detail.interval_msec ) OVER ten_rows AS BIGINT ) AS "Avg Index Adds/s"
  FROM sample_detail
 WHERE COALESCE ( sample_detail.interval_msec, 0 ) > 0
WINDOW ten_rows AS ( ORDER BY sample_detail.sample_recorded_at
                         ROWS BETWEEN 9 PRECEDING AND CURRENT ROW )
 ORDER BY "Avg Index Adds/s" DESC;

Recorded At             Avg Index Adds/s
----------------------- --------------------
2018-12-05 09:40:43.878               179868
2018-12-05 09:40:24.003               179063
2018-12-05 09:40:34.191               179063
2018-12-05 09:40:53.800               177794
2018-12-05 09:40:14.722               171822
2018-12-05 09:40:04.378               149262
2018-12-05 09:41:03.722               141270
2018-12-05 09:39:55.065               125124
2018-12-05 09:41:15.254               114842
2018-12-05 09:39:44.471                97237
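As suggested above, the window size is easy to change; this sketch (not a delivered example) uses the 5-sample window and applies it to a different rate, commits per second:

-- Rolling 5-sample average commit rate; top 10 values.
SELECT TOP 10
       LAST_VALUE ( sample_detail.sample_recorded_at ) OVER five_rows AS "Recorded At",
       CAST ( AVG ( sample_detail.interval_Commit * 1000.0 / sample_detail.interval_msec ) OVER five_rows
              AS DECIMAL ( 11, 2 ) ) AS "Avg Commits/s"
  FROM sample_detail
 WHERE COALESCE ( sample_detail.interval_msec, 0 ) > 0
WINDOW five_rows AS ( ORDER BY sample_detail.sample_recorded_at
                          ROWS BETWEEN 4 PRECEDING AND CURRENT ROW )
 ORDER BY "Avg Commits/s" DESC;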
SELECT TOP 100 * FROM purge_run ORDER BY run_number DESC;
-- This is a template SELECT that makes all the server and database-level columns
-- available for the most recent 10 samples in each target database, but excludes
-- the connection-level details.
--
-- The "eyecatcher" columns HEADER: and DETAIL: are included to make your life a
-- little easier when scrolling from side to side through so many columns.
SELECT CAST ( rroad_f_limit_string_length_with_ellipsis ( STRING ( sampling_options.sampling_id, ' - ', sampling_options.selected_name,
                 IF sampling_options.selected_tab = '1' THEN ' (DSN)' ELSE '' END IF ),
              30, 'center' ) AS VARCHAR ( 30 ) ) AS "Target Database",
       sample_header.row_order AS "Recent Row Number",
       ' HEADER:',
       sample_header.*,
       ' DETAIL:',
       sample_detail.*
  FROM sampling_options
       INNER JOIN ( SELECT *,
                           RANK() OVER sample_window AS row_order
                      FROM sample_header
                    WINDOW sample_window AS ( PARTITION BY sampling_id
                                                  ORDER BY sample_set_number DESC )
                  ) AS sample_header ON sample_header.sampling_id = sampling_options.sampling_id
       INNER JOIN sample_detail ON sample_detail.sample_set_number = sample_header.sample_set_number
 WHERE sample_header.row_order <= 10
 ORDER BY sampling_options.selected_name,
          sample_header.sample_set_number DESC;
-- This is a template SELECT that makes all the server, database and connection-level
-- columns available for the most recent 10 samples in each target database.
-- Note that the sample_header and sample_detail columns (and the Recent Row Number values)
-- are REPEATED for every sample_connection row... so this is very much JUST a template :)
--
-- The "eyecatcher" columns HEADER:, DETAIL: and CONNECTION: are included to make your
-- life a little easier when scrolling from side to side through so many columns.
SELECT CAST ( rroad_f_limit_string_length_with_ellipsis ( STRING ( sampling_options.sampling_id, ' - ', sampling_options.selected_name,
                 IF sampling_options.selected_tab = '1' THEN ' (DSN)' ELSE '' END IF ),
              30, 'center' ) AS VARCHAR ( 30 ) ) AS "Target Database",
       sample_header.row_order AS "Recent Row Number",
       ' HEADER:',
       sample_header.*,
       ' DETAIL:',
       sample_detail.*,
       ' CONNECTION:',
       sample_connection.*
  FROM sampling_options
       INNER JOIN ( SELECT *,
                           RANK() OVER sample_window AS row_order
                      FROM sample_header
                    WINDOW sample_window AS ( PARTITION BY sampling_id
                                                  ORDER BY sample_set_number DESC )
                  ) AS sample_header ON sample_header.sampling_id = sampling_options.sampling_id
       INNER JOIN sample_detail ON sample_detail.sample_set_number = sample_header.sample_set_number
       LEFT OUTER JOIN sample_connection ON sample_connection.sample_set_number = sample_detail.sample_set_number
 WHERE sample_header.row_order <= 10
 ORDER BY sampling_options.selected_name,
          sample_header.sample_set_number DESC,
          sample_connection.connection_number;
-- The WINDOW clause partitions the sample_header rows by target database,
-- sorts the most recent rows to the top and then numbers those rows
-- so the WHERE clause can select the first 2 from each partition.
-- Note that the final ORDER BY is alphabetical by selected_name rather than
-- using the "Target Database" which includes the sampling_id.
SELECT CAST ( rroad_f_limit_string_length_with_ellipsis ( STRING ( sampling_options.sampling_id, ' - ', sampling_options.selected_name,
                 IF sampling_options.selected_tab = '1' THEN ' (DSN)' ELSE '' END IF ),
              30, 'center' ) AS VARCHAR ( 30 ) ) AS "Target Database",
       sample_header.row_order AS "Recent Row Number",
       sample_header.sampling_id,
       sample_header.sample_set_number,
       sample_header.sample_finished_at
  FROM sampling_options
       INNER JOIN ( SELECT *,
                           RANK() OVER sample_window AS row_order
                      FROM sample_header
                    WINDOW sample_window AS ( PARTITION BY sampling_id
                                                  ORDER BY sample_set_number DESC )
                  ) AS sample_header ON sample_header.sampling_id = sampling_options.sampling_id
 WHERE sample_header.row_order <= 2
 ORDER BY sampling_options.selected_name,
          sample_header.sample_set_number DESC;

Target Database                Recent Row Number sampling_id sample_set_number    sample_finished_at
------------------------------ ----------------- ----------- -------------------- -----------------------
     1 - Accounting (DSN)                      1           1                 1180 2018-12-08 10:07:39.716
     1 - Accounting (DSN)                      2           1                 1176 2018-12-08 10:07:29.805
   5 - inventory17 (offline)                   1           5                 1087 2018-12-08 10:03:32.256
   5 - inventory17 (offline)                   2           5                 1086 2018-12-08 10:03:32.256
      2 - Payroll (DSN)                        1           2                 1177 2018-12-08 10:07:29.992
      2 - Payroll (DSN)                        2           2                 1173 2018-12-08 10:07:20.097
     3 - Receiving (DSN)                       1           3                 1178 2018-12-08 10:07:30.302
     3 - Receiving (DSN)                       2           3                 1174 2018-12-08 10:07:20.440
      4 - Shipping (DSN)                       1           4                 1179 2018-12-08 10:07:35.629
      4 - Shipping (DSN)                       2           4                 1175 2018-12-08 10:07:25.720
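To keep only the single most recent sample per target, tighten the predicate to row_order = 1; here's a stripped-down sketch (same technique as above, minus the display formatting):

-- One row per target database: the most recent sample.
SELECT sample_header.sampling_id,
       sample_header.sample_set_number,
       sample_header.sample_finished_at
  FROM ( SELECT *,
                RANK() OVER sample_window AS row_order
           FROM sample_header
         WINDOW sample_window AS ( PARTITION BY sampling_id
                                       ORDER BY sample_set_number DESC )
       ) AS sample_header
 WHERE sample_header.row_order = 1
 ORDER BY sample_header.sampling_id;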
WITH averages AS (
   SELECT sample_detail.sampling_id,
          sampling_options.selected_name AS target_database,
          sample_detail.sample_set_number,
          sample_detail.sample_recorded_at,
          ( sample_detail.interval_msec / 1000.0 ) AS interval_sec,
          ( sample_detail.canarian_query_elapsed_msec / 1000.0 ) AS heartbeat_sec,
          ( sample_detail.sample_elapsed_msec / 1000.0 ) AS sample_sec,
          ( sample_detail.successful_ping_elapsed_msec / 1000.0 ) AS ping_sec,
          sample_detail.interval_Req / interval_sec AS requests_per_sec,
          ( sample_detail.interval_BytesReceived + sample_detail.interval_BytesSent ) / interval_sec AS bytes_per_sec,
          sample_detail.interval_Commit / interval_sec AS commits_per_sec,
          CAST ( AVG ( heartbeat_sec ) OVER moving_window AS DECIMAL ( 11, 2 ) ) AS avg_heartbeat_sec,
          CAST ( AVG ( sample_sec ) OVER moving_window AS DECIMAL ( 11, 2 ) ) AS avg_sample_sec,
          CAST ( AVG ( ping_sec ) OVER moving_window AS DECIMAL ( 11, 2 ) ) AS avg_ping_sec,
          CAST ( AVG ( requests_per_sec ) OVER moving_window AS DECIMAL ( 11, 2 ) ) AS avg_requests_per_sec,
          CAST ( AVG ( bytes_per_sec ) OVER moving_window AS DECIMAL ( 11, 2 ) ) AS avg_bytes_per_sec,
          CAST ( AVG ( commits_per_sec ) OVER moving_window AS DECIMAL ( 11, 2 ) ) AS avg_commits_per_sec
     FROM sampling_options
          INNER JOIN sample_detail ON sample_detail.sampling_id = sampling_options.sampling_id
    WHERE sample_detail.sample_recorded_at BETWEEN 'Dec 7 7:00:00 AM' AND 'Dec 7 3:56:07 PM' -- EDIT THIS
      AND sample_detail.sampling_id = 3 -- EDIT THIS
      AND sample_detail.interval_msec > 0
      AND sample_detail.sample_lost = 'N'
   WINDOW moving_window AS ( ORDER BY sample_detail.sample_set_number DESC
                                 ROWS BETWEEN CURRENT ROW AND 99 FOLLOWING ) )
-- Note: This example doesn't SELECT all the columns in averages.
SELECT TOP 10
       averages.sampling_id,
       averages.sample_set_number,
       averages.avg_heartbeat_sec,
       averages.avg_sample_sec,
       averages.avg_ping_sec,
       averages.avg_requests_per_sec,
       averages.avg_bytes_per_sec,
       averages.avg_commits_per_sec
  FROM averages
 ORDER BY averages.sample_set_number DESC;

sampling_id sample_set_number    avg_heartbeat_sec avg_sample_sec avg_ping_sec avg_requests_per_sec avg_bytes_per_sec avg_commits_per_sec
----------- -------------------- ----------------- -------------- ------------ -------------------- ----------------- -------------------
          3              2507606              0.30           2.86         0.22               333.62         330563.59               29.69
          3              2507601              0.30           2.84         0.23               335.20         333073.81               29.76
          3              2507592              0.30           2.83         0.20               334.56         325784.20               29.54
          3              2507584              0.28           2.81         0.18               334.64         322260.53               29.50
          3              2507576              0.28           2.81         0.18               333.02         321284.52               29.36
          3              2507567              0.28           2.80         0.18               334.06         311788.19               29.29
          3              2507559              0.29           2.80         0.18               334.66         312784.67               29.27
          3              2507551              0.29           2.79         0.17               335.55         312595.48               29.37
          3              2507543              0.28           2.79         0.17               334.35         318440.75               29.30
          3              2507535              0.28           2.78         0.17               333.20         324179.88               29.28
-- Response time volatility measures how much the interval, heartbeat, sample and ping times
-- varied up and down in one sampling interval.
-- Connection number churn is the total number of external client connections that were created
-- and dropped in one sampling interval. If the same connection is created and dropped in the same
-- interval it is counted as 2.
-- An external client connection has a connection number in the range 1 to 999,999,999.
-- The interval time is a measurement of response time even though Foxhound displays it separately
-- from other response time numbers; i.e., the interval time is supposed to be 10 seconds or close to it.
-- High numbers for volatility and churn may indicate unusual application behavior where frequent
-- bursts of activity and short-lived connections are causing performance problems that are not
-- reflected in other performance statistics displayed by Foxhound.
-- The WITH maxconn view returns the highest-numbered client connection in each sample.
-- The WITH diffconn view returns the change in the highest client connection number for each sample.
-- The WITH connrate view calculates the volatility and churn rates for each sample.
-- Note: This query might work best for continuous unbroken ranges of successful Foxhound samples,
--       with no "Sampling stopped" or other gaps.
WITH maxconn AS
   ( -- The BETWEEN 1 AND 999999999 predicate includes external client connections and excludes
     -- high-numbered internal, event and web service connections.
     -- The GROUP BY and MAX are used to determine the highest-numbered connection in each sample.
     SELECT sample_connection.sampling_id                 AS sampling_id,
            sample_connection.sample_set_number           AS sample_set_number,
            COUNT(*)                                      AS client_connection_count,
            MAX ( sample_connection.connection_number )   AS max_client_connection_number
       FROM sample_connection
      WHERE sample_connection.connection_number BETWEEN 1 AND 999999999
      GROUP BY sample_connection.sampling_id,
               sample_connection.sample_set_number ),
diffconn AS
   ( -- The WINDOW moving_pair contains the previous and current samples.
     SELECT maxconn.sampling_id                                                       AS sampling_id,
            maxconn.sample_set_number                                                 AS this_sample_set_number,
            maxconn.client_connection_count                                           AS this_client_connection_count,
            maxconn.max_client_connection_number                                      AS this_max_client_connection_number,
            FIRST_VALUE ( maxconn.sample_set_number ) OVER moving_pair                AS prev_sample_set_number,
            FIRST_VALUE ( maxconn.client_connection_count ) OVER moving_pair          AS prev_client_connection_count,
            FIRST_VALUE ( maxconn.max_client_connection_number ) OVER moving_pair     AS prev_max_client_connection_number,
            this_max_client_connection_number - prev_max_client_connection_number     AS interval_max_connection_number_increase
       FROM maxconn
     WINDOW moving_pair AS ( ORDER BY maxconn.sample_set_number
                                 ROWS BETWEEN 1 PRECEDING AND CURRENT ROW ) ),
connrate AS
   ( SELECT diffconn.sampling_id AS sampling_id,
            diffconn.this_sample_set_number AS this_sample_set_number,
            this_sample_detail.sample_recorded_at AS this_sample_recorded_at,
            diffconn.this_client_connection_count AS this_client_connection_count,
            diffconn.this_max_client_connection_number AS this_max_client_connection_number,
            this_sample_detail.interval_msec AS this_interval_msec,
            this_sample_detail.canarian_query_elapsed_msec AS this_canarian_query_elapsed_msec,
            this_sample_detail.sample_elapsed_msec AS this_sample_elapsed_msec,
            this_sample_detail.successful_ping_elapsed_msec AS this_successful_ping_elapsed_msec,
            diffconn.prev_sample_set_number AS prev_sample_set_number,
            diffconn.prev_client_connection_count AS prev_client_connection_count,
            diffconn.prev_max_client_connection_number AS prev_max_client_connection_number,
            prev_sample_detail.interval_msec AS prev_interval_msec,
            prev_sample_detail.canarian_query_elapsed_msec AS prev_canarian_query_elapsed_msec,
            prev_sample_detail.sample_elapsed_msec AS prev_sample_elapsed_msec,
            prev_sample_detail.successful_ping_elapsed_msec AS prev_successful_ping_elapsed_msec,
            ABS ( this_sample_detail.interval_msec - prev_sample_detail.interval_msec ) AS interval_volatility,
            ABS ( this_sample_detail.canarian_query_elapsed_msec - prev_sample_detail.canarian_query_elapsed_msec ) AS heartbeat_volatility,
            ABS ( this_sample_detail.sample_elapsed_msec - prev_sample_detail.sample_elapsed_msec ) AS sample_time_volatility,
            ABS ( this_sample_detail.successful_ping_elapsed_msec - prev_sample_detail.successful_ping_elapsed_msec ) AS ping_volatility,
            -- The connection numbers that were added (interval_max_client_connection_number_increase)
            --    = the largest client connection number that exists now (this_max_client_connection_number)
            --    - the largest client connection number used to exist (prev_max_client_connection_number)
            --    or zero, whichever is larger (the value in this_max_client_connection_number may be too small).
            GREATER ( diffconn.this_max_client_connection_number - diffconn.prev_max_client_connection_number, 0 ) AS interval_max_client_connection_number_increase,
            -- The connection rate (client_connection_rate) is the hourly rate at which client connection
            -- numbers are being consumed. The rate may vary greatly because the interval is very short
            -- (only a few seconds) and the rate is calculated for a much longer period (an hour).
            -- Short high-rate bursts may not be significant but sustained high rates probably are.
            CAST ( ( CAST ( interval_max_client_connection_number_increase AS DECIMAL ( 20, 6 ) )
                     / CAST ( this_sample_detail.interval_msec AS DECIMAL ( 20, 6 ) ) )
                   * ( 1000.0 * 60.0 * 60.0 ) AS INTEGER ) AS client_connection_rate,
            -- The number of connections that were dropped (interval_client_disconnection_count)
            --    = the connection count that used to exist (prev_client_connection_count)
            --    + the connection numbers that were added (interval_max_client_connection_number_increase)
            --    - the connection count that exists now (this_client_connection_count).
            diffconn.prev_client_connection_count
               + interval_max_client_connection_number_increase
               - diffconn.this_client_connection_count AS interval_client_disconnection_count,
            -- The churn (connection_churn)
            --    includes both the connection numbers that were added (interval_max_client_connection_number_increase)
            --    plus the number of connections that were dropped (interval_client_disconnection_count).
            interval_max_client_connection_number_increase
               + interval_client_disconnection_count AS connection_churn
       FROM diffconn
            INNER JOIN sample_detail AS this_sample_detail ON this_sample_detail.sample_set_number = diffconn.this_sample_set_number
            INNER JOIN sample_detail AS prev_sample_detail ON prev_sample_detail.sample_set_number = diffconn.prev_sample_set_number )
SELECT [ see examples below ];
-----------------------------------------------------------------------------------------
-- Example 1 : Top 10 heartbeat volatility.
-- The sample_set_number BETWEEN predicate specifies a continuous range of successful
-- Foxhound samples, with no "Sampling stopped" or other gaps.
-- Note: This example doesn't SELECT all the columns in connrate.
SELECT TOP 10
       CAST ( connrate.this_sample_set_number AS INTEGER ) AS "Sample Set",
       connrate.this_sample_recorded_at AS "Recorded At",
       CAST ( connrate.interval_volatility AS INTEGER ) AS "Interval Volatility",
       CAST ( connrate.heartbeat_volatility AS INTEGER ) AS "Heartbeat Volatility",
       CAST ( connrate.sample_time_volatility AS INTEGER ) AS "Sample Time Volatility",
       CAST ( connrate.ping_volatility AS INTEGER ) AS "Ping Volatility",
       CAST ( connrate.client_connection_rate AS INTEGER ) AS "Connections Per Hour",
       CAST ( connrate.connection_churn AS INTEGER ) AS "Connection Churn"
  FROM connrate
 WHERE connrate.sampling_id = 3
   AND connrate.this_sample_set_number BETWEEN 2481232 AND 2507618
 ORDER BY "Heartbeat Volatility" DESC;

Sample Set  Recorded At             Interval Volatility Heartbeat Volatility Sample Time Volatility Ping Volatility Connections Per Hour Connection Churn
----------- ----------------------- ------------------- -------------------- ---------------------- --------------- -------------------- ----------------
    2491738 2018-12-07 10:35:06.069                 468                 4047                   2047              16                36477              267
    2491743 2018-12-07 10:35:09.100               10391                 4016                   2407              16                39194               67
    2486362 2018-12-07 08:46:17.425                9141                 2907                     48              94               192867              654
    2486356 2018-12-07 08:46:11.284                6173                 2891                   1344              16                77974              654
    2487646 2018-12-07 09:12:38.526               16999                 2579                   5968            2469                23213              295
    2491563 2018-12-07 10:31:34.628                  16                 2578                    625              16                31640              210
    2506286 2018-12-07 15:29:10.528                6813                 2531                   1156             579                73851              287
    2495142 2018-12-07 11:43:12.154                2546                 2485                     31             516                65991              230
    2502128 2018-12-07 14:05:41.531               15109                 2437                   8124              15                25536              356
    2502138 2018-12-07 14:05:46.938               19827                 2406                   5171              16               137155              413

-----------------------------------------------------------------------------------------
-- Example 2 : Top 10 connection churn.
-- The sample_set_number BETWEEN predicate specifies a continuous range of successful
-- Foxhound samples, with no "Sampling stopped" or other gaps.
-- Note: This example doesn't SELECT all the columns in connrate.
-- Note: Connection Rate is hourly, whereas Connection Churn is measured for one sample interval.
SELECT TOP 10
       CAST ( connrate.this_sample_set_number AS INTEGER ) AS "Sample Set",
       connrate.this_sample_recorded_at AS "Recorded At",
       CAST ( connrate.interval_volatility AS INTEGER ) AS "Interval Volatility",
       CAST ( connrate.heartbeat_volatility AS INTEGER ) AS "Heartbeat Volatility",
       CAST ( connrate.sample_time_volatility AS INTEGER ) AS "Sample Time Volatility",
       CAST ( connrate.ping_volatility AS INTEGER ) AS "Ping Volatility",
       CAST ( connrate.client_connection_rate AS INTEGER ) AS "Connections Per Hour",
       CAST ( connrate.connection_churn AS INTEGER ) AS "Connection Churn"
  FROM connrate
 WHERE connrate.sampling_id = 3
   AND connrate.this_sample_set_number BETWEEN 2481232 AND 2507618
 ORDER BY "Connection Churn" DESC;

Sample Set  Recorded At             Interval Volatility Heartbeat Volatility Sample Time Volatility Ping Volatility Connections Per Hour Connection Churn
----------- ----------------------- ------------------- -------------------- ---------------------- --------------- -------------------- ----------------
    2484787 2018-12-07 08:14:06.662                 437                    0                     47              16               204359             1146
    2487124 2018-12-07 09:01:51.548                2359                   16                    688              16               171773             1090
    2483301 2018-12-07 07:43:21.009                 500                    0                     46               0               201502             1087
    2506421 2018-12-07 15:31:52.655                1031                   15                   1031               1               241067             1084
    2483309 2018-12-07 07:43:31.619                 891                    0                      0               1               180508             1057
    2483406 2018-12-07 07:45:32.620                1688                  672                    157             563               149429             1031
    2500085 2018-12-07 13:23:47.369                6563                    0                   2954               1               136770             1007
    2496220 2018-12-07 12:04:43.361                 187                  313                     47              15               171675              994
    2504826 2018-12-07 14:59:36.814                 422                   94                   1484             890               167595              966
    2501625 2018-12-07 13:55:10.163                3406                  391                     47               0               207559              964
-- Samples selected by these [sampling_id,sample_set_number] key pairs shown at the top -- of the Samples and Messages section of the Sample History page: -- -- [1,438407] -- [1,438406] -- [1,1497] -- [1,89] -- [1,88] SELECT sample_detail.* FROM sample_detail WHERE sample_detail.sampling_id = 1 AND sample_detail.sample_set_number IN ( 88, 89, 1497, 438406, 438407 ) ORDER BY sample_detail.sample_set_number DESC;
-- Samples selected by these [sampling_id,sample_set_number] key pairs shown at the top -- of the Samples and Messages section of the Sample History page: -- -- [1,438407] -- [1,438406] -- [1,1497] -- [1,89] -- [1,88] SELECT IF sampling_options.selected_tab = 1 THEN 'DSN' ELSE 'String' END IF AS connection_type, sampling_options.selected_name AS target_database, ' HEADER:', sample_header.*, ' DETAIL:', sample_detail.*, ' CONNECTION:', sample_connection.* FROM sampling_options INNER JOIN sample_header ON sample_header.sampling_id = sampling_options.sampling_id INNER JOIN sample_detail ON sample_detail.sample_set_number = sample_header.sample_set_number LEFT OUTER JOIN sample_connection ON sample_connection.sample_set_number = sample_detail.sample_set_number WHERE sample_header.sampling_id = 1 AND sample_header.sample_set_number IN ( 88, 89, 1497, 438406, 438407 ) ORDER BY sample_header.sample_set_number DESC, sample_connection.connection_number;
SELECT CAST ( rroad_f_limit_string_length_with_ellipsis ( STRING ( sampling_options.sampling_id, ' - ', sampling_options.selected_name,
                 IF sampling_options.selected_tab = '1' THEN ' (DSN)' ELSE '' END IF ),
              40, 'center' ) AS VARCHAR ( 40 ) ) AS "Target Database",
       sampling_should_be_running AS "Sampling Should Be Running",
       CAST ( rroad_f_limit_string_length_with_ellipsis ( connection_status_message, 40, 'right' ) AS VARCHAR ( 40 ) ) AS "Connection Status Message",
       last_sample_finished_at AS "Last Sample At"
  FROM sampling_options
 ORDER BY sampling_options.sampling_id;

Target Database                          Sampling Should Be Running Connection Status Message                Last Sample At
---------------------------------------- -------------------------- ---------------------------------------- -----------------------
          1 - Accounting (DSN)           Y                                                       Sampling OK 2018-12-08 09:36:47.384
           2 - Payroll (DSN)             Y                                                       Sampling OK 2018-12-08 09:36:47.636
          3 - Receiving (DSN)            Y                                                       Sampling OK 2018-12-08 09:36:47.804
           4 - Shipping (DSN)            Y                                                       Sampling OK 2018-12-08 09:36:53.107
        5 - inventory17 (offline)        N                                                         (offline) 2018-12-08 09:33:17.938
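A watchdog variant of the same query (a sketch, not a delivered example; the 60-second threshold is an arbitrary choice to edit) lists only the targets that should be sampling but haven't recorded a sample recently:

-- Which targets should be sampling but have gone quiet?
SELECT sampling_options.sampling_id,
       sampling_options.selected_name,
       sampling_options.connection_status_message,
       sampling_options.last_sample_finished_at
  FROM sampling_options
 WHERE sampling_options.sampling_should_be_running = 'Y'
   AND sampling_options.last_sample_finished_at < DATEADD ( SECOND, -60, CURRENT TIMESTAMP )   -- EDIT THIS
 ORDER BY sampling_options.sampling_id;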
SELECT TOP 10 rroad_f_number_with_commas ( purge_run.run_number ) AS "Run Number", DATEFORMAT ( purge_run.purge_started_at_timestamp, 'YYYY-MM-DD HH:NN:SS' ) AS "Started", rroad_f_msecs_as_abbreviated_d_h_m_s_ms ( purge_run.elapsed_msec ) AS "Run Time", STRING ( purge_run.progress, ' / ', purge_run.status ) AS "Progress / Status", rroad_f_number_with_commas ( purge_run.deleted_sample_header + purge_run.deleted_sample_detail + purge_run.deleted_sample_connection + purge_run.deleted_ping_log + purge_run.deleted_exception + purge_run.deleted_exception_dump + purge_run.deleted_purge_run + purge_run.deleted_alert + purge_run.deleted_all_clear + purge_run.deleted_alert_cancelled + purge_run.deleted_email_failure + purge_run.deleted_autodropped_connection + purge_run.deleted_server_message + purge_run.deleted_long_varchar ) AS "Rows Deleted", IF purge_run.is_complete = 'Y' THEN DATEFORMAT ( purge_run.data_deletion_up_to_timestamp, 'YYYY-MM-DD HH:NN:SS' ) ELSE '-' ENDIF AS "Deleted Up To", rroad_f_bytes_as_kmg ( purge_run.foxhound_file_size * purge_run.foxhound_page_size ) AS "File", IF purge_run.foxhound_file_size <= 0 THEN '' ELSE STRING ( CAST ( ROUND ( CAST ( purge_run.foxhound_free_pages AS DECIMAL ( 30, 2 ) ) / CAST ( purge_run.foxhound_file_size AS DECIMAL ( 30, 2 ) ) * 100.0, 0 ) AS BIGINT ), '%' ) END IF AS "Free", rroad_f_number_with_commas ( purge_run.foxhound_fragments ) AS "Frags" FROM purge_run ORDER BY purge_run.run_number DESC;
SELECT CAST ( AVG ( interval_msec / 1000.0 ) AS DECIMAL ( 10, 2 ) ) AS "Interval sec",
       CAST ( AVG ( sample_elapsed_msec ) AS DECIMAL ( 10, 2 ) ) AS "Sample msec",
       CAST ( AVG ( canarian_query_elapsed_msec ) AS DECIMAL ( 10, 2 ) ) AS "Heartbeat msec",
       CAST ( AVG ( ActiveReq ) AS DECIMAL ( 10, 2 ) ) AS "ActiveReq",
       CAST ( AVG ( UnschReq ) AS DECIMAL ( 10, 2 ) ) AS "UnschReq",
       CAST ( AVG ( ConnCount ) AS DECIMAL ( 10, 2 ) ) AS "Conns",
       CAST ( AVG ( executing_connection_count ) AS DECIMAL ( 10, 2 ) ) AS "Executing",
       CAST ( AVG ( idle_connection_count ) AS DECIMAL ( 10, 2 ) ) AS "Idle",
       CAST ( AVG ( waiting_connection_count ) AS DECIMAL ( 10, 2 ) ) AS "Waiting",
       CAST ( AVG ( total_blocked_connection_count ) AS DECIMAL ( 10, 2 ) ) AS "Blocked"
  FROM sample_detail
 WHERE sample_detail.sample_recorded_at BETWEEN '2019-11-05 1:02:56 PM' AND '2019-11-06 4:09:50 AM' -- EDIT THIS
   AND sample_detail.sampling_id = 1 -- EDIT THIS
   AND sample_detail.interval_msec > 0
   AND sample_detail.sample_lost = 'N';

SELECT CAST ( AVG ( 1000.0 * ( interval_Req ) / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "Req/s",
       CAST ( AVG ( 1000.0 * ( interval_Commit ) / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "Commits/s",
       CAST ( AVG ( 1000.0 * ( ( interval_BytesReceived + interval_BytesSent ) / sample_detail.page_size ) / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "Bytes k/s",
       CAST ( AVG ( total_waiting_time ) AS DECIMAL ( 10, 2 ) ) AS "Total Waiting",
       CAST ( AVG ( interval_waiting_time ) AS DECIMAL ( 10, 2 ) ) AS "Interval Waiting",
       CAST ( AVG ( 1000.0 * ( interval_DiskRead + interval_DiskWrite ) / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "DiskIO/s",
       CAST ( AVG ( interval_FullCompare ) AS DECIMAL ( 10, 2 ) ) AS "FullCompare",
       CAST ( AVG ( 1000.0 * ( interval_IndAdd + interval_IndLookup ) / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "IndexIO/s",
       CAST ( AVG ( 1000.0 * ( interval_LogWrite ) / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "LogIO/s",
       CAST ( AVG ( ROUND ( sample_detail.interval_ProcessCPU * 1000.0
                            / ( sample_detail.interval_msec * CAST ( sample_detail.CPU_count AS DECIMAL ( 30, 6 ) ) )
                            * 100.0, 1 ) ) AS DECIMAL ( 10, 2 ) ) AS "CPU %"
  FROM sample_detail
 WHERE sample_detail.sample_recorded_at BETWEEN '2019-11-05 1:02:56 PM' AND '2019-11-06 4:09:50 AM' -- EDIT THIS
   AND sample_detail.sampling_id = 1 -- EDIT THIS
   AND sample_detail.interval_msec > 0
   AND sample_detail.sample_lost = 'N';

Interval sec Sample msec  Heartbeat msec ActiveReq    UnschReq     Conns        Executing    Idle         Waiting      Blocked
------------ ------------ -------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------
        9.90       770.05           8.55         1.59         0.13       100.51         2.46       100.08         0.01         0.00

Req/s        Commits/s    Bytes k/s    Total Waiting Interval Waiting DiskIO/s     FullCompare  IndexIO/s    LogIO/s      CPU %
------------ ------------ ------------ ------------- ---------------- ------------ ------------ ------------ ------------ ------------
      351.79         6.27        45.37        179.39             0.11         3.91         0.00       935.39         0.00        13.53
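The same averages can be bucketed by hour to show how load changes over a day; here's a minimal sketch (not a delivered example) for two of the rates:

-- Hourly request and commit rates for one target database.
SELECT DATEFORMAT ( sample_detail.sample_recorded_at, 'yyyy-mm-dd hh' ) AS "Hour",
       CAST ( AVG ( 1000.0 * interval_Req    / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "Req/s",
       CAST ( AVG ( 1000.0 * interval_Commit / sample_detail.interval_msec ) AS DECIMAL ( 10, 2 ) ) AS "Commits/s"
  FROM sample_detail
 WHERE sample_detail.sampling_id = 1 -- EDIT THIS
   AND sample_detail.interval_msec > 0
   AND sample_detail.sample_lost = 'N'
 GROUP BY "Hour"
 ORDER BY "Hour";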
-- Specify the sampling_id values for at least the partner 1 and partner 2 databases in the WHERE clause;
-- the arbiter and copy databases are optional:
--
--    WHERE HA_messages.sampling_id IN ( 7, 8, 9, 10 )  -- arbiter, partner 1, partner 2, copy targets
--
-- Suggestion: Use Foxhound sampling sessions that are directly connected to the partner 1 and
--             partner 2 databases, rather than to the primary and secondary (mirror) databases,
--             because that will make the output easier to read.
--             I.e., the server column will continue to match the physical server while the
--             disposition column will show how the roles change after failovers.
--
-- Note: The disposition and server columns are sometimes empty because of limitations in Foxhound.
--
-- Caution: This query WILL NOT WORK PROPERLY if any of the target databases are running on the same
--          SQL Anywhere server. It WILL work properly if the SQL Anywhere servers are running on
--          the same computer, as long as the databases are on different SQL Anywhere servers.
--          However, different computers are recommended in production, for maximum availability.

WITH HA_messages AS (

   SELECT CAST ( server_message.sampling_id AS INTEGER )    AS sampling_id,
          CAST ( server_message.locator_number AS INTEGER ) AS locator,
          server_message.msg_id                             AS msg_id,
          server_message.msg_time                           AS msg_time,
          LEFT ( sampling_options.selected_name, 15 )       AS target,
          COALESCE (
             CASE
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'On'   THEN ''           -- non-HA read-only
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'Off'  THEN ''           -- updatable or arbiter
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'
                 AND sample_detail.ArbiterState = ''
                 AND sample_detail.PartnerState = '' THEN 'Copy'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'   THEN 'Secondary'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'Off'  THEN 'Primary'
                ELSE ''
             END CASE, '' )                                 AS disposition,
          LEFT ( COALESCE ( sample_detail.ServerName, '' ), 15 ) AS server,
          LEFT ( IF server_message.msg_severity = 'INFO'
                    THEN ' INFO'
                    ELSE server_message.msg_severity
                 END IF, 10 )                               AS severity,
          LEFT ( LTRIM ( server_message.msg_text ), 120 )   AS msg_text
     FROM server_message
          INNER JOIN sampling_options
                  ON sampling_options.sampling_id = server_message.sampling_id
          LEFT OUTER JOIN sample_detail
                  ON sample_detail.sample_set_number = server_message.locator_number
    WHERE server_message.msg_category NOT IN ( 'CHKPT' )
      AND (    server_message.msg_text LIKE '%mirror%'                     -- language EN
            OR server_message.msg_text LIKE '%Database server started at%' -- language EN
            OR server_message.msg_text LIKE '%Database%started as%'        -- language EN
            OR server_message.msg_text LIKE '%Starting database%'          -- language EN
            OR server_message.msg_severity IN ( 'ERR', 'WARN' ) )

   UNION ALL

   SELECT CAST ( alert.sampling_id AS INTEGER )      AS sampling_id,
          CAST ( alert.alert_occurrence AS INTEGER ) AS locator,
          CAST ( 0 AS UNSIGNED BIGINT )              AS msg_id,
          alert.alert_in_effect_at                   AS msg_time,
          LEFT ( sampling_options.selected_name, 15 ) AS target,
          COALESCE (
             CASE
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'On'   THEN ''           -- non-HA read-only
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'Off'  THEN ''           -- updatable or arbiter
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'
                 AND sample_detail.ArbiterState = ''
                 AND sample_detail.PartnerState = '' THEN 'Copy'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'   THEN 'Secondary'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'Off'  THEN 'Primary'
                ELSE ''
             END CASE, '' )                          AS disposition,
          LEFT ( COALESCE ( sample_detail.ServerName, '' ), 15 ) AS server,
          IF alert.alert_is_clear_or_cancelled = 'Y'
             THEN '[Alert]'
             ELSE 'Alert'
          END IF                                     AS severity,
          LEFT ( LTRIM ( STRING ( '#', alert.alert_number, ' ', alert_title.alert_title ) ), 120 ) AS msg_text
     FROM alert
          INNER JOIN alert_title
                  ON alert_title.alert_number = alert.alert_number
          INNER JOIN sampling_options
                  ON sampling_options.sampling_id = alert.sampling_id
          LEFT OUTER JOIN sample_detail
                  ON sample_detail.sample_set_number = alert.sample_set_number

   UNION ALL

   SELECT CAST ( all_clear.sampling_id AS INTEGER )          AS sampling_id,
          CAST ( all_clear.all_clear_occurrence AS INTEGER ) AS locator,
          CAST ( 0 AS UNSIGNED BIGINT )                      AS msg_id,
          all_clear.alert_all_clear_at                       AS msg_time,
          LEFT ( sampling_options.selected_name, 15 )        AS target,
          COALESCE (
             CASE
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'On'   THEN ''           -- non-HA read-only
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'Off'  THEN ''           -- updatable or arbiter
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'
                 AND sample_detail.ArbiterState = ''
                 AND sample_detail.PartnerState = '' THEN 'Copy'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'   THEN 'Secondary'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'Off'  THEN 'Primary'
                ELSE ''
             END CASE, '' )                                  AS disposition,
          LEFT ( COALESCE ( sample_detail.ServerName, '' ), 15 ) AS server,
          'All Clear'                                        AS severity,
          LEFT ( LTRIM ( STRING ( '#', all_clear.alert_number, ' ', alert_title.alert_title ) ), 120 ) AS msg_text
     FROM all_clear
          INNER JOIN alert_title
                  ON alert_title.alert_number = all_clear.alert_number
          INNER JOIN sampling_options
                  ON sampling_options.sampling_id = all_clear.sampling_id
          LEFT OUTER JOIN sample_detail
                  ON sample_detail.sample_set_number = all_clear.sample_set_number

   UNION ALL

   SELECT CAST ( alert_cancelled.sampling_id AS INTEGER )                AS sampling_id,
          CAST ( alert_cancelled.alert_cancelled_occurrence AS INTEGER ) AS locator,
          CAST ( 0 AS UNSIGNED BIGINT )                                  AS msg_id,
          alert_cancelled.alert_all_clear_at                             AS msg_time,
          LEFT ( sampling_options.selected_name, 15 )                    AS target,
          COALESCE (
             CASE
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'On'   THEN ''           -- non-HA read-only
                WHEN sample_detail.MirrorState = ''
                 AND sample_detail.ReadOnly = 'Off'  THEN ''           -- updatable or arbiter
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'
                 AND sample_detail.ArbiterState = ''
                 AND sample_detail.PartnerState = '' THEN 'Copy'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'On'   THEN 'Secondary'
                WHEN sample_detail.MirrorState <> ''
                 AND sample_detail.ReadOnly = 'Off'  THEN 'Primary'
                ELSE ''
             END CASE, '' )                                              AS disposition,
          LEFT ( COALESCE ( sample_detail.ServerName, '' ), 15 )         AS server,
          'Cancelled'                                                    AS severity,
          LEFT ( LTRIM ( STRING ( '#', alert_cancelled.alert_number, ' ', alert_title.alert_title ) ), 120 ) AS msg_text
     FROM alert_cancelled
          INNER JOIN alert_title
                  ON alert_title.alert_number = alert_cancelled.alert_number
          INNER JOIN sampling_options
                  ON sampling_options.sampling_id = alert_cancelled.sampling_id
          LEFT OUTER JOIN sample_detail
                  ON sample_detail.sample_set_number = alert_cancelled.sample_set_number
)
SELECT CAST ( STRING ( HA_messages.sampling_id, ',',
                       HA_messages.locator, ',',
                       HA_messages.msg_id ) AS VARCHAR ( 32 ) ) AS "sampling_id,locator,msg_id",
       HA_messages.msg_time,
       HA_messages.target,
       HA_messages.disposition,
       HA_messages.server,
       HA_messages.severity,
       HA_messages.msg_text
  FROM HA_messages
 WHERE HA_messages.sampling_id IN ( 45, 46 )            -- e.g., HA primary and secondary (mirror)
   AND HA_messages.msg_time >= '2020-01-01 2:43:47 PM'  -- optional starting point for query
 ORDER BY HA_messages.msg_time DESC,                    -- same order as Monitor and History pages
          HA_messages.msg_id DESC,
          HA_messages.sampling_id DESC,
          HA_messages.locator DESC;

-----------------------------------------------
-- An HA failover as viewed by the logical primary and secondary (mirror) servers...
-- (the partner1 server stopped while it was running as the primary).

SELECT...
 WHERE HA_messages.sampling_id IN ( 45, 46 )  -- e.g., HA primary and secondary (mirror)

sampling_id,locator,msg_id msg_time                target          disposition server          severity   msg_text
-------------------------- ----------------------- --------------- ----------- --------------- ---------- ----------------------------------------------------------------------
46,36954,0                 2020-01-01 14:45:29.926 secondary_demo  Primary     partner2_server All Clear  #31 Database updatable
45,36779,0                 2020-01-01 14:44:03.875 primary_demo    Primary     partner2_server Alert      #11 ServerName change
45,36778,0                 2020-01-01 14:44:03.875 primary_demo    Primary     partner2_server Alert      #10 Partner unreachable
46,36776,59                2020-01-01 14:43:53.591 secondary_demo  Primary     partner2_server INFO       Database "ddd" mirroring: connected to child node "copy_server"
45,36777,59                2020-01-01 14:43:53.591 primary_demo    Primary     partner2_server INFO       Database "ddd" mirroring: connected to child node "copy_server"
46,36752,0                 2020-01-01 14:43:50.754 secondary_demo  Primary     partner2_server [Alert]    #31 Database updatable
46,36751,0                 2020-01-01 14:43:50.754 secondary_demo  Primary     partner2_server Alert      #10 Partner unreachable
46,36750,58                2020-01-01 14:43:50.632 secondary_demo  Primary     partner2_server INFO       Database "ddd" mirroring: now running as primary server
45,36753,58                2020-01-01 14:43:50.632 primary_demo                                INFO       Database "ddd" mirroring: now running as primary server
46,36750,51                2020-01-01 14:43:49.778 secondary_demo  Primary     partner2_server INFO       Database "ddd" mirroring: becoming primary server
45,36753,51                2020-01-01 14:43:49.778 primary_demo                                INFO       Database "ddd" mirroring: becoming primary server
46,36750,50                2020-01-01 14:43:49.778 secondary_demo  Primary     partner2_server INFO       Database "ddd" mirroring: disconnected from partner "partner1_server"
45,36753,50                2020-01-01 14:43:49.778 primary_demo                                INFO       Database "ddd" mirroring: disconnected from partner "partner1_server"

-----------------------------------------------
-- The same HA failover as viewed by the real partner1 and partner2 servers...
-- (the partner1 server stopped while it was running as the primary).

SELECT...
 WHERE HA_messages.sampling_id IN ( 43, 44 )  -- e.g., HA partner1 and partner2

sampling_id,locator,msg_id msg_time                target          disposition server          severity   msg_text
-------------------------- ----------------------- --------------- ----------- --------------- ---------- ----------------------------------------------------------------------
44,36973,0                 2020-01-01 14:45:36.511 partner2_demo   Primary     partner2_server All Clear  #31 Database updatable
43,36881,0                 2020-01-01 14:44:50.883 partner1_demo   Primary     partner1_server Alert      #1 Database unresponsive
44,36768,0                 2020-01-01 14:43:57.584 partner2_demo   Primary     partner2_server [Alert]    #31 Database updatable
44,36767,0                 2020-01-01 14:43:57.584 partner2_demo   Primary     partner2_server Alert      #10 Partner unreachable
44,36764,59                2020-01-01 14:43:53.591 partner2_demo   Primary     partner2_server INFO       Database "ddd" mirroring: connected to child node "copy_server"
44,36764,58                2020-01-01 14:43:50.632 partner2_demo   Primary     partner2_server INFO       Database "ddd" mirroring: now running as primary server
44,36764,51                2020-01-01 14:43:49.778 partner2_demo   Primary     partner2_server INFO       Database "ddd" mirroring: becoming primary server
44,36764,50                2020-01-01 14:43:49.778 partner2_demo   Primary     partner2_server INFO       Database "ddd" mirroring: disconnected from partner "partner1_server"
-- The LATERAL join selects the most recent successful sample that precedes each Alert.
-- The final WHERE clause limits the result set to samples that use
-- Intra-Query Parallelism; i.e., sample_detail.total_child_connection_count > 0.

SELECT TOP 100
       CAST ( rroad_f_limit_string_length_with_ellipsis (
                 STRING ( sampling_options.sampling_id,
                          ' - ',
                          sampling_options.selected_name,
                          IF sampling_options.selected_tab = '1'
                             THEN ' (DSN)'
                             ELSE ''
                          END IF ),
                 50,
                 'right' ) AS VARCHAR ( 50 ) )                                            AS "Target Database",
       CAST ( STRING ( alert.alert_number, ' ', alert_title.alert_title ) AS VARCHAR ( 24 ) ) AS "Alert",
       CAST ( alert.alert_occurrence AS INTEGER )                                         AS "Go to Alert",
       CAST ( sample_detail.sample_set_number AS INTEGER )                                AS "Go to Sample",
       sample_detail.total_child_connection_count                                         AS "Child Connections",
       CAST ( sample_detail.interval_CPU_percent AS DECIMAL ( 6, 1 ) )                    AS "CPU %"
  FROM sampling_options,
       alert,
       alert_title,
       LATERAL ( SELECT TOP 1 *
                   FROM sample_detail
                  WHERE sample_detail.sample_lost = 'N'
                    AND sample_detail.sampling_id = alert.sampling_id
                    AND sample_detail.sample_set_number < alert.alert_occurrence
                  ORDER BY sample_detail.sample_set_number DESC
               ) AS sample_detail
 WHERE alert.sampling_id = sampling_options.sampling_id
   AND alert.alert_number = alert_title.alert_number
   AND sample_detail.total_child_connection_count > 0
 ORDER BY alert.alert_occurrence DESC;

Target Database                                    Alert                    Go to Alert Go to Sample Child Connections CPU %
-------------------------------------------------- ------------------------ ----------- ------------ ----------------- --------
6 - Inventory (DSN)                                19 Cache size                1708345      1708344                 1     11.2
5 - Inventory                                      14 Unscheduled requests      1688769      1688768                 6     45.6
5 - Inventory                                      14 Unscheduled requests      1688698      1688697                10     37.8
5 - Inventory                                      4 CPU usage                  1688669      1688668                 4     50.3
5 - Inventory                                      27 Connection CPU            1688651      1688650                 4     42.8
5 - Inventory                                      4 CPU usage                  1688649      1688648                 4     50.8
-- The connection_id_string is displayed on the Connection History page. It is an
-- artificial identifier constructed by Foxhound to uniquely identify each target
-- connection within the entire Foxhound database.
--
-- Note that it is perfectly OK to place some columns at the front of the SELECT list and
-- then select all of the columns again (sample_connection.*) in case you need to look at them.

SELECT sample_detail.sample_recorded_at,      -- some columns
       sample_connection.sample_set_number,
       sample_connection.LastPlanText,
       sample_connection.LastStatement,
       sample_connection.*                    -- all columns
  FROM sample_connection
       INNER JOIN sample_detail
               ON sample_detail.sample_set_number = sample_connection.sample_set_number
 WHERE sample_connection.connection_id_string = '1-1-20181208092415-003'
 ORDER BY sample_connection.sample_set_number DESC;
-- The connection_id_string is displayed on the Connection History page. It is an
-- artificial identifier constructed by Foxhound to uniquely identify each target
-- connection within the entire Foxhound database.
--
-- Note that it is perfectly OK to place some columns at the front of the SELECT list and
-- then select all of the columns again (sample_connection.*) in case you need to look at them.

SELECT sample_detail.sample_recorded_at,      -- some columns
       sample_connection.sample_set_number,
       sample_connection.LastPlanText,
       sample_connection.LastStatement,
       sample_connection.*                    -- all columns
  FROM sample_connection
       INNER JOIN sample_detail
               ON sample_detail.sample_set_number = sample_connection.sample_set_number
 WHERE sample_connection.connection_id_string = '1-1-20181208092415-003'
   AND sample_detail.sample_recorded_at <= '2018-12-08 12:48:49.906'
 ORDER BY sample_connection.sample_set_number DESC;
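The exact layout of connection_id_string isn't spelled out here, but examples such as '1-1-20181208092415-003' suggest hyphen-separated parts. As a hedged sketch (not a Foxhound feature, and what each part means is an assumption to verify against your own data), sa_split_list() can break one apart:

SELECT line_num,   -- position of the part within the string
       row_value   -- the part itself, as a string
  FROM sa_split_list ( '1-1-20181208092415-003', '-' );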
-- Sometimes it may be difficult to pinpoint the exact Foxhound sample that contains data relevant to
-- a performance problem. For example, in the case of a long interval between Foxhound samples,
-- high connection-level CPU usage may show up in the sample before or after the sample with the long interval.
--
-- The WITH triple view shows how a WINDOW with ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING, together with the
-- FIRST_VALUE() and LAST_VALUE() functions, may be used to gather data from three successive samples:
-- first, mid, last.
--
-- In this case the WITH triple view gathers three interval_msec values and three interval_CPU_percent values
-- together with the three sample_set_number values... but you can pick different columns.
--
-- The outer SELECT extends the example by limiting the query to long Foxhound intervals ( triple.mid_int >= 15000 ).
-- It also uses an INNER JOIN to find connections using a lot of CPU ( sample_connection.interval_CPU_percent >= 5.0 ).
-- Again, you can pick different columns; the basic technique is to use a WINDOW to gather samples in sets of three,
-- then use an outer SELECT to study the data in those three samples.

WITH triple AS
   ( SELECT sample_detail.sampling_id,
            CAST ( FIRST_VALUE ( sample_detail.sample_set_number ) OVER three_rows AS DECIMAL ( 9 ) )    AS first_samp,
            CAST ( sample_detail.sample_set_number AS DECIMAL ( 9 ) )                                    AS mid_samp,
            CAST ( LAST_VALUE ( sample_detail.sample_set_number ) OVER three_rows AS DECIMAL ( 9 ) )     AS last_samp,
            CAST ( FIRST_VALUE ( sample_detail.interval_msec ) OVER three_rows AS DECIMAL ( 9 ) )        AS first_int,
            CAST ( sample_detail.interval_msec AS DECIMAL ( 9 ) )                                        AS mid_int,
            CAST ( LAST_VALUE ( sample_detail.interval_msec ) OVER three_rows AS DECIMAL ( 9 ) )         AS last_int,
            CAST ( FIRST_VALUE ( sample_detail.interval_CPU_percent ) OVER three_rows AS DECIMAL ( 5 ) ) AS first_CPU,
            CAST ( sample_detail.interval_CPU_percent AS DECIMAL ( 5 ) )                                 AS mid_CPU,
            CAST ( LAST_VALUE ( sample_detail.interval_CPU_percent ) OVER three_rows AS DECIMAL ( 5 ) )  AS last_CPU
       FROM sample_detail
      WHERE sample_detail.sample_lost = 'N'
     WINDOW three_rows AS ( ORDER BY sample_detail.sample_recorded_at
                            ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING )
      ORDER BY sample_detail.sample_recorded_at ASC )
SELECT TOP 10
       CAST ( sample_connection.connection_number AS DECIMAL ( 10 ) )   AS "Conn #",
       CAST ( sample_connection.interval_CPU_percent AS DECIMAL ( 5 ) ) AS "Conn CPU",
       CAST ( LEFT ( sample_connection.userid, 15 ) AS VARCHAR ( 15 ) ) AS "User",
       triple.*
  FROM triple
       INNER JOIN ( SELECT *
                      FROM sample_connection
                     WHERE sample_connection.interval_CPU_percent >= 5.0
                       AND sample_connection.connection_number BETWEEN 1 AND 999999999
                  ) AS sample_connection
               ON sample_connection.sample_set_number IN ( triple.first_samp, triple.mid_samp, triple.last_samp )
 WHERE triple.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND triple.mid_int >= 15000
 ORDER BY "Conn CPU" DESC;

-- Note: Successive sample_set_numbers are often not contiguous because the missing (intervening) values are
--       used for other target databases (different values of sampling_id).
Conn #       Conn CPU User            sampling_id first_samp  mid_samp    last_samp   first_int   mid_int     last_int    first_CPU mid_CPU last_CPU
------------ -------- --------------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- --------- ------- --------
     8839201       13 DBA                       3     2289684     2289692     2289701        9875       15501        5265        27      23       19
     1337447       13 a.Fernandez               3     2495586     2495597     2495599        8875       16235        3672        28      24       43
     1207731       13 d.Rodriguez               3     2486345     2486356     2486362        9109       15282        6141        21      22       33
     1279629       12 j.Garcia                  3     2495586     2495597     2495599        8875       16235        3672        28      24       43
     1249806       12 m.Sanchez                 3     2488806     2488818     2488823        9344       17219        2735        23      16       23
     1328308       12 e.Perez                   3     2499412     2499423     2499430        5563       16875        3250        19      20       27
     8959927       12 m.Sanchez                 3     2298225     2298233     2298246        8656       15391       11047        33      27       30
     1457194       12 ACCTadmin                 3     2502434     2502445     2502454        2937       17594       21000        32      24       20
     1457194       12 ACCTadmin                 3     2502416     2502431     2502434        9609       17798        2937        40      25       32
      669685       12 f.Jones                   3     2434928     2434939     2434946        8828       17703        3500        46      31       34
-- A "sudden impact" connection starts using a very high level of CPU as soon as it connects.
-- This query looks for the highest sample_connection.interval_CPU_percent values in the same
-- samples in which the connections started.

SELECT TOP 10
       sample_connection.sampling_id,
       sample_connection.sample_set_number,
       STRING (
          sample_connection.connection_number,
          ' / ',
          IF TRIM ( COALESCE ( sample_connection.Userid, '' ) ) = ''
             THEN '-'
             ELSE sample_connection.Userid
          ENDIF,
          ' / ',
          IF TRIM ( COALESCE ( sample_connection.OSUser, '' ) ) = ''
             THEN '-'
             ELSE sample_connection.OSUser
          ENDIF,
          ' / ',
          IF TRIM ( COALESCE ( sample_connection.NodeAddress, '' ) ) IN ( '', 'NA' )
             THEN '-'
             ELSE sample_connection.NodeAddress
          ENDIF,
          ' / ',
          IF COALESCE ( sample_connection.Name, '' ) = ''
             THEN '-'
             ELSE sample_connection.Name
          ENDIF )                               AS "Conn #, User, OS User, IP, Name",
       sample_connection.interval_CPU_percent   AS interval_cpu_percent,
       sample_connection.LastStatement          AS LastStatement
  FROM sample_connection
       INNER JOIN sample_detail
               ON sample_detail.sample_set_number = sample_connection.sample_set_number
 WHERE sample_connection.ReqStatus = 'Executing'           -- this connection is busy
   AND sample_connection.time_connected                    -- this connection started at some point
          < COALESCE ( sample_detail.interval_msec, 0 )    --    after the previous sample was recorded
   AND sample_connection.interval_CPU_percent > 0.0        -- this connection had an immediate impact
   AND TRIM ( COALESCE ( sample_connection.LastStatement, '' ) ) <> ''
 ORDER BY sample_connection.interval_cpu_percent DESC,
          sample_connection.sample_set_number ASC;
-- The WITH max_temp view calculates the maximum size of the temporary file for each
-- instance (start time) of the SQL Anywhere server.

WITH max_temp AS
   ( SELECT sample_detail.sampling_id,
            sample_detail.StartTime            AS "max_StartTime",
            MAX ( sample_detail.tempFileSize ) AS "max_tempFilePages"
       FROM sample_detail
      WHERE sample_detail.sample_lost = 'N'
      GROUP BY sample_detail.sampling_id,
               sample_detail.StartTime )
SELECT CAST ( DATEFORMAT ( sample_detail.StartTime, 'yyyy-mm-dd hh:nn:ss' ) AS VARCHAR ( 20 ) )                   AS "Server Start At",
       CAST ( DATEFORMAT ( MIN ( sample_detail.sample_recorded_at ), 'yyyy-mm-dd hh:nn:ss' ) AS VARCHAR ( 20 ) )  AS "High Tide At",
       CAST ( max_temp."max_tempFilePages" AS DECIMAL ( 11 ) )                                                    AS "Temp File Pages",
       CAST ( rroad_f_bytes_as_kmg ( max_temp."max_tempFilePages" * sample_detail.page_size ) AS VARCHAR ( 14 ) ) AS "Temp File KMG"
  FROM sample_detail
       INNER JOIN max_temp
               ON max_temp.sampling_id         = sample_detail.sampling_id
              AND max_temp."max_StartTime"     = sample_detail.StartTime
              AND max_temp."max_tempFilePages" = sample_detail.tempFileSize
 WHERE sample_detail.sampling_id = 3   -- CHANGE THIS IF NECESSARY
   AND sample_detail.sample_lost = 'N'
 GROUP BY sample_detail.StartTime,
          max_temp."max_tempFilePages",
          sample_detail.page_size
 ORDER BY sample_detail.StartTime;

Server Start At      High Tide At         Temp File Pages Temp File KMG
-------------------- -------------------- --------------- --------------
2018-11-11 07:40:08  2018-12-03 11:43:51            28417 888M
2018-12-05 04:12:38  2018-12-05 09:39:16            21337 667M
-- The WITH normalized_sample_connection clause creates a new view with two more columns than the
-- original sample_connection view. Note that rows in sample_connection will be missing from the
-- new view if the flags_set column is filled with zeros.

WITH normalized_sample_connection AS
   ( -- The LEFT ( REPLACE ( REPLACE ... turns a VARBIT ( 30 ) string '0101...' into a
     -- VARCHAR ( 59 ) string '0,1,0,1,...'.
     -- The sa_split_list() call changes the comma-delimited string into a result set.
     -- The LATERAL join expands each sample_connection row with one or more non-zero flags_set characters.
     -- Note that sample_connection rows with all zeros in flags_set will NOT be included in the result set.
     SELECT sample_connection.*,
            flags_set.flag_number,
            flag_title.flag_title
       FROM sample_connection,
            LATERAL ( SELECT line_num  AS flag_number,
                             row_value AS flag_set_0_1
                        FROM sa_split_list (
                                LEFT ( REPLACE ( REPLACE (
                                          CAST ( sample_connection.flags_set AS VARCHAR ( 30 ) ),
                                          '0', '0,' ),
                                          '1', '1,' ),
                                       59 ) )
                    ) AS flags_set,
            flag_title
      WHERE flags_set.flag_set_0_1 = '1'
        AND flags_set.flag_number  = flag_title.flag_number )
SELECT normalized_sample_connection.connection_id_string,
       normalized_sample_connection.flag_number,
       normalized_sample_connection.flag_title
  FROM normalized_sample_connection
 WHERE normalized_sample_connection.sample_set_number = 408060
   AND normalized_sample_connection.connection_number = 32
 ORDER BY normalized_sample_connection.connection_id_string,
          normalized_sample_connection.flag_number;

connection_id_string                               flag_number flag_title
-------------------------------------------------- ----------- --------------------------------------------------
2-32-20190520153639-985                                      4 Temp space usage
2-32-20190520153639-985                                      6 Locks
2-32-20190520153639-985                                      8 Uncommitted operations
2-32-20190520153639-985                                     13 Disk write rate
2-32-20190520153639-985                                     20 OS user usage
2-32-20190520153639-985                                     22 Connection name usage
2-32-20190520153639-985                                     25 Total disk reads
2-32-20190520153639-985                                     26 Total disk writes
2-32-20190520153639-985                                     28 Total index adds
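The REPLACE-then-split trick above can be hard to visualize, so here is a minimal standalone sketch; the literal '0101' stands in for a real flags_set value, and LEFT ( ..., 7 ) plays the same trailing-comma-trimming role that LEFT ( ..., 59 ) plays for a full 30-character string:

SELECT line_num  AS flag_number,   -- 1, 2, 3, 4
       row_value AS flag_set_0_1   -- '0', '1', '0', '1'
  FROM sa_split_list ( LEFT ( REPLACE ( REPLACE ( '0101', '0', '0,' ), '1', '1,' ), 7 ) );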
-- Count connection-level samples with very high levels of CPU usage.
-- Copy and paste the "First Sample At" value into the "Go to:" field on the Sample History page
-- to scroll to the first sample for the very-high-CPU connection.
-- The high_conn view gathers data from qualifying individual connection-level samples,
-- and the outer SELECT groups the data for display.

WITH high_conn AS
   ( SELECT sampling_options.sampling_id                                               AS "ID",
            DATEFORMAT ( sample_detail.sample_recorded_at, 'yyyy-mm-dd hh:nn:ss.sss' ) AS "Conn Sample At",
            CAST ( sample_connection.interval_CPU_percent AS DECIMAL ( 5, 1 ) )        AS "Conn CPU %",
            CAST ( sample_connection.connection_number AS DECIMAL ( 11 ) )             AS "Conn #",
            CAST ( LEFT ( STRING ( sample_connection.Userid, ' - ', sample_connection.Name ), 25 )
                   AS VARCHAR ( 25 ) )                                                 AS "User - Conn Name",
            CAST ( LEFT ( LTRIM ( REPLACE ( REPLACE ( REPLACE (
                      COALESCE ( sample_connection.LastStatement, '' ),
                      '\X0A', ' ' ), '\X0D', ' ' ), '\X09', ' ' ) ), 30 )
                   AS VARCHAR ( 30 ) )                                                 AS "LastStatement"
       FROM sample_detail
            INNER JOIN sample_connection
                    ON sample_connection.sample_set_number = sample_detail.sample_set_number
            INNER JOIN sampling_options
                    ON sampling_options.sampling_id = sample_connection.sampling_id
      WHERE sample_connection.interval_CPU_percent > 90.0    -- include only very-high-CPU connection samples
        AND sample_detail.interval_msec >= 5000              -- eliminate abnormally short sample intervals
        AND sample_connection.connection_number < 1000000000 -- eliminate events, web services and internal connections
   )
SELECT TOP 100
       CAST ( COUNT(*) AS DECIMAL ( 5 ) ) AS "Count",
       "Conn CPU %",
       "ID",
       MIN ( "Conn Sample At" )           AS "First Sample At",
       "Conn #",
       "User - Conn Name",
       "LastStatement"
  FROM high_conn
 GROUP BY "Conn CPU %", "ID", "Conn #", "User - Conn Name", "LastStatement"
 ORDER BY "Conn CPU %" DESC, "Count" DESC, "ID" ASC, "Conn #" ASC;

Count   Conn CPU % ID         First Sample At         Conn #        User - Conn Name          LastStatement
------- ---------- ---------- ----------------------- ------------- ------------------------- ------------------------------
      7      100.0          5 2018-04-10 21:02:00.521         11494 DAugust - GEN_HistoryItem call generalHistItemsLoad(:brn
      4      100.0          5 2018-04-09 20:31:01.684        128336 GHahnst - SQL_DBC_4d00ba8 Select Distinct AccountNumber
      3      100.0          5 2018-05-11 21:23:24.448            48 JWellington - SQL_DBC_4af Select Distinct AccountNumber
      3      100.0          5 2018-05-14 17:47:55.486         16977 JWellington - SQL_DBC_509 Select Distinct AccountNumber
      3      100.0          5 2018-04-23 16:29:41.554        223432 JWellington - SQL_DBC_492 Select Distinct AccountNumber
The Foxhound adhoc view connection_string has been expanded to include three new OFSS columns:
-- CREATE VIEW dba.connection_string (
--    connection_string_name,     -- VARCHAR ( 128 )
--    connection_string,          -- LONG VARCHAR
--    OFSS_subject_uuid,          -- VARCHAR ( 36 )  *** NEW ***
--    OFSS_subject_name,          -- VARCHAR ( 80 )  *** NEW ***
--    OFSS_subject_name_index )   -- INTEGER         *** NEW ***

CREATE VIEW connection_string AS SELECT * FROM rroad_connection_string;

OFSS subject databases are identified by the suffix "(offline)" appended to their connection_string_name values. These rows do appear on the Monitor tab of the Foxhound Menu page, but not on the String tab, because they cannot be manually created or edited; they can only be added when OFSS samples are loaded via rroad_OFSS_load().
Here's what an OFSS subject database looks like in the Foxhound connection_string view:
SELECT *
  FROM connection_string
 WHERE connection_string_name LIKE '%(offline)'
 ORDER BY connection_string_name;

connection_string_name, connection_string, OFSS_subject_uuid, OFSS_subject_name, OFSS_subject_name_index
'inventory17a (offline)', '(offline)', '80dc1d72-292a-4639-8be7-dbf1d5cd7c2c', 'inventory17a', 1

The connection_string_name column is the primary key for the underlying rroad_connection_string table, and that's the value you see in these locations in Foxhound:
- the list of databases on the Monitor tab of the Foxhound Menu page, and
- the title on the Sample History page.
The OFSS_subject_name column contains the original subject_name value received in the input OFSS files for a particular OFSS subject database. The OFSS_subject_uuid values are unique, but OFSS_subject_name values are not; duplicate values of OFSS_subject_name are assigned OFSS_subject_name_index values 1, 2, 3, ... which are used to create unique connection_string_name values, as sketched below.
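Here's a hedged sketch of that naming rule, reconstructed from the example that follows rather than taken from Foxhound's actual code; in particular, the zero-padded '(00n)' format is an assumption:

SELECT OFSS_subject_name,
       OFSS_subject_name_index,
       IF OFSS_subject_name_index = 1
          THEN STRING ( OFSS_subject_name, ' (offline)' )
          ELSE STRING ( OFSS_subject_name, ' (',
                        RIGHT ( STRING ( '00', OFSS_subject_name_index ), 3 ),
                        ') (offline)' )
       ENDIF AS derived_connection_string_name   -- assumed reconstruction, not the stored column
  FROM connection_string
 WHERE connection_string_name LIKE '%(offline)';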
For example, if three separate OFSS subject databases all have the same subject_name = 'inventory17a', here's what the Foxhound connection_string view looks like:
connection_string_name, connection_string, OFSS_subject_uuid, OFSS_subject_name, OFSS_subject_name_index
'inventory17a (offline)', '(offline)', '80dc1d72-292a-4639-8be7-dbf1d5cd7c2c', 'inventory17a', 1
'inventory17a (002) (offline)', '(offline)', '06bffd58-425a-4094-9927-e4bfeaf3b114', 'inventory17a', 2
'inventory17a (003) (offline)', '(offline)', 'd9593d27-7789-4c02-b475-d9de0578299b', 'inventory17a', 3

Note that you can change OFSS_settings.subject_name on your subject database, but if you wait until after sampling starts, your change probably won't have any effect on what you see in Foxhound. That's because the original subject_name is stored in the first batch and used to create the connection_string_name when it reaches Foxhound.
The best way to change the subject_name is to edit and rerun the OFSS_1_setup.sql script on your subject database. That creates a new subject_uuid value which in turn forces the creation of a new sampling session (and connection_string_name) in Foxhound.
For example, here's where you can change the subject_name and unload_path in OFSS_1_setup.sql:
...
INSERT "{OWNER}".OFSS_settings (
   subject_uuid,
   subject_name,
   sampling_is_to_continue,
   unload_path,
   server_messages_are_to_be_captured,
   server_started_at,
   msg_id )
VALUES (
   UUIDTOSTR ( NEWID() ),                            -- unchangeable and unique across all instances of this client software
   LEFT ( DB_PROPERTY ( 'Name' ), 80 ),              -- UPDATE this to differentiate among multiple like-named OFSS subject databases
   'Y',                                              -- UPDATE this to 'N' to stop sampling
   'C:\\temp\\',                                     -- UPDATE this to specify a different output path
   'Y',                                              -- UPDATE this to 'N' to stop calling sa_server_messages()
   CAST ( PROPERTY ( 'StartTime' ) AS TIMESTAMP ),   -- don't UPDATE this
   NULL );                                           -- don't UPDATE this
...
The rroad_recalculate_flags procedure was a last-minute addition to Foxhound Version 5. It lets you change the Connection Flag Settings on the Monitor Options page and then put those settings into effect for all the samples that have already been gathered.
There's no button or other GUI interface for this procedure yet, but you can call it directly from All Programs - Foxhound5 - 11 Admin Update via ISQL.
Administrative Note: You cannot use All Programs - Foxhound5 - 2 Adhoc Query via ISQL to call rroad_recalculate_flags(). That's because adhoc queries can be used in Read-Only mode when Administrator Authentication is in effect, and rroad_recalculate_flags() modifies the Foxhound database.
Here's an example of how to recalculate all the Connection Flags for a target database that has been assigned sampling_id = 2:
CALL rroad_recalculate_flags ( 2 );

This example recalculates the flags for all the target databases from sampling_id = 1 to 10:
CALL rroad_recalculate_flags ( 1, 10 );

This example recalculates the flags for all the target databases that exist, as well as writing progress messages to the Foxhound console and the text file C:\ProgramData\RisingRoad\Foxhound5\foxhound5_debug.txt:
CALL rroad_recalculate_flags ( 1, 4294967295, 'Y' );

I. 06/22 08:10:45. 2019-06-22 08:10:45.468 Recalculation of connection flags started...
I. 06/22 08:10:45. 2019-06-22 08:10:45.483 Recalculation of connection flags for sampling_id 1 started...
I. 06/22 08:11:04. 2019-06-22 08:11:04.939 ...200000 connection flag row updates OK
I. 06/22 08:11:24. 2019-06-22 08:11:24.487 ...400000 connection flag row updates OK
I. 06/22 08:11:43. 2019-06-22 08:11:43.988 ...600000 connection flag row updates OK
I. 06/22 08:12:03. 2019-06-22 08:12:03.855 ...800000 connection flag row updates OK
I. 06/22 08:12:14. 2019-06-22 08:12:14.915 Recalculation of connection flags for sampling_id 2 started...
I. 06/22 08:12:23. 2019-06-22 08:12:23.337 ...1000000 connection flag row updates OK
I. 06/22 08:12:43. 2019-06-22 08:12:43.494 ...1200000 connection flag row updates OK
I. 06/22 08:13:02. 2019-06-22 08:13:02.397 ...1400000 connection flag row updates OK
I. 06/22 08:13:21. 2019-06-22 08:13:21.040 ...1600000 connection flag row updates OK
I. 06/22 08:13:39. 2019-06-22 08:13:39.051 ...1800000 connection flag row updates OK
I. 06/22 08:13:57. 2019-06-22 08:13:57.105 ...2000000 connection flag row updates OK
I. 06/22 08:14:15. 2019-06-22 08:14:15.032 ...2200000 connection flag row updates OK
I. 06/22 08:14:32. 2019-06-22 08:14:32.602 ...2400000 connection flag row updates OK
I. 06/22 08:14:50. 2019-06-22 08:14:50.237 ...2600000 connection flag row updates OK
I. 06/22 08:15:08. 2019-06-22 08:15:08.367 ...2800000 connection flag row updates OK
I. 06/22 08:15:26. 2019-06-22 08:15:26.454 ...3000000 connection flag row updates OK
I. 06/22 08:15:44. 2019-06-22 08:15:44.554 ...3200000 connection flag row updates OK
I. 06/22 08:16:02. 2019-06-22 08:16:02.902 ...3400000 connection flag row updates OK
I. 06/22 08:16:11. 2019-06-22 08:16:11.878 ...3494989 connection flag row updates all OK
I. 06/22 08:16:11. 2019-06-22 08:16:11.885 ...Recalculation of connection flags finished OK

Here is how to modify the first example to add progress messages:
CALL rroad_recalculate_flags ( 2, 2, 'Y' );

Here's how the parameters are declared:
CREATE PROCEDURE rroad_recalculate_flags (
   IN @input_from_sampling_id   UNSIGNED INTEGER,
   IN @input_to_sampling_id     UNSIGNED INTEGER DEFAULT 0,
   IN @send_messages_to_console VARCHAR ( 1 )    DEFAULT 'N' )
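Judging by the first example above and the DEFAULT 0 on @input_to_sampling_id (an inference from this page, not a documented guarantee), a second argument of zero appears to mean "same as the first argument", so these two calls should behave identically:

CALL rroad_recalculate_flags ( 2 );      -- @input_to_sampling_id defaults to 0
CALL rroad_recalculate_flags ( 2, 2 );   -- assumed explicit equivalent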
The rroad_recalculate_peaks procedure was a last-minute addition to Foxhound Version 5. It lets you restore the values on the "Peaks since" row on the Monitor and History pages after you have used the Reset Peaks buttons to set old values to zero.
There's no button or other GUI interface for this procedure yet, but you can call it directly from All Programs - Foxhound5 - 11 Admin Update via ISQL.
Administrative Note: You cannot use All Programs - Foxhound5 - 2 Adhoc Query via ISQL to call rroad_recalculate_peaks(). That's because adhoc queries can be used in Read-Only mode when Administrator Authentication is in effect, and rroad_recalculate_peaks() modifies the Foxhound database.
Here's an example of how to recalculate all the Peaks values for a target database that has been assigned sampling_id = 2:
CALL rroad_recalculate_peaks ( 2 );

This example recalculates the peaks for all the target databases from sampling_id = 1 to 10:
CALL rroad_recalculate_peaks ( 1, 10 );

This example recalculates the peaks for all the target databases that exist, as well as writing progress messages to the Foxhound console and the text file C:\ProgramData\RisingRoad\Foxhound5\foxhound5_debug.txt:
CALL rroad_recalculate_peaks ( 1, 4294967295, 'Y' );

I. 06/22 14:07:07. 2019-06-22 14:07:07.702 Recalculation of peaks started...
I. 06/22 14:07:07. 2019-06-22 14:07:07.707 Recalculation of peaks for sampling_id 1 started...
I. 06/22 14:07:08. 2019-06-22 14:07:08.721 Recalculation of peaks for sampling_id 2 started...
I. 06/22 14:07:09. 2019-06-22 14:07:09.744 ...14946 rows processed all OK
I. 06/22 14:07:09. 2019-06-22 14:07:09.747 ...Recalculation of peaks finished OK

Here is how to modify the first example to add progress messages:
CALL rroad_recalculate_peaks ( 2, 2, 'Y' );

Here's how the parameters are declared:
CREATE PROCEDURE rroad_recalculate_peaks (
   IN @input_from_sampling_id   UNSIGNED INTEGER,
   IN @input_to_sampling_id     UNSIGNED INTEGER DEFAULT 0,
   IN @send_messages_to_console VARCHAR ( 1 )    DEFAULT 'N' )