I260425 04:02:35.734110 1 util/log/file_sync_buffer.go:238 ⋮ [config] file created at: 2026/04/25 04:02:35
I260425 04:02:35.734113 1 util/log/file_sync_buffer.go:238 ⋮ [config] running on machine: ‹w-01KQ1BC5HRWJ5SP7TBQMAPZDXM›
I260425 04:02:35.734115 1 util/log/file_sync_buffer.go:238 ⋮ [config] binary: CockroachDB OSS v22.1.22-64-g86fdbfca06 (x86_64-linux-gnu, built 2026/03/18 01:46:58, go1.22.11)
I260425 04:02:35.734118 1 util/log/file_sync_buffer.go:238 ⋮ [config] arguments: [‹cockroach› ‹start-single-node› ‹--insecure› ‹--http-addr=:0› ‹--store=path=/var/tmp/omicron_tmp/.tmph1pXNC/data,ballast-size=0› ‹--listen-addr› ‹[::1]:0› ‹--listening-url-file› ‹/var/tmp/omicron_tmp/.tmph1pXNC/listen-url› ‹--max-sql-memory› ‹256MiB›]
I260425 04:02:35.734124 1 util/log/file_sync_buffer.go:238 ⋮ [config] log format (utf8=✓): crdb-v2
I260425 04:02:35.734125 1 util/log/file_sync_buffer.go:238 ⋮ [config] line format: [IWEF]yymmdd hh:mm:ss.uuuuuu goid [chan@]file:line redactionmark \[tags\] [counter] msg
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 ALL SECURITY CONTROLS HAVE BEEN DISABLED!
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +This mode is intended for non-production testing only.
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +In this mode:
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +- Your cluster is open to any client that can access ‹::1›.
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +- Intruders with access to your machine or network can observe client-server traffic.
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +- Intruders can log in without password and read or write any data in the cluster.
W260425 04:02:35.734032 1 1@cli/start.go:1180 ⋮ [n?] 1 +- Intruders can consume all your server's resources and cause unavailability.
I260425 04:02:35.734168 1 1@cli/start.go:1190 ⋮ [n?] 2 To start a secure server without mandating TLS for clients,
I260425 04:02:35.734168 1 1@cli/start.go:1190 ⋮ [n?] 2 +consider --accept-sql-without-tls instead. For other options, see:
I260425 04:02:35.734168 1 1@cli/start.go:1190 ⋮ [n?] 2 +
I260425 04:02:35.734168 1 1@cli/start.go:1190 ⋮ [n?] 2 +- ‹https://go.crdb.dev/issue-v/53404/v22.1›
I260425 04:02:35.734168 1 1@cli/start.go:1190 ⋮ [n?] 2 +- https://www.cockroachlabs.com/docs/v22.1/secure-a-cluster.html
I260425 04:02:35.734315 1 server/status/recorder.go:620 ⋮ [n?] 3 ‹available memory from cgroups (8.0 EiB) is unsupported, using system memory 31 GiB instead:›
W260425 04:02:35.734327 1 1@cli/start.go:1106 ⋮ [n?] 4 ‹Using the default setting for --cache (128 MiB).›
W260425 04:02:35.734327 1 1@cli/start.go:1106 ⋮ [n?] 4 +‹ A significantly larger value is usually needed for good performance.›
W260425 04:02:35.734327 1 1@cli/start.go:1106 ⋮ [n?] 4 +‹ If you have a dedicated server a reasonable setting is --cache=.25 (7.7 GiB).›
I260425 04:02:35.734392 1 server/status/recorder.go:620 ⋮ [n?] 5 ‹available memory from cgroups (8.0 EiB) is unsupported, using system memory 31 GiB instead:›
I260425 04:02:35.734396 1 1@cli/start.go:1219 ⋮ [n?] 6 ‹CockroachDB OSS v22.1.22-64-g86fdbfca06 (x86_64-linux-gnu, built 2026/03/18 01:46:58, go1.22.11)›
I260425 04:02:35.734666 1 server/status/recorder.go:620 ⋮ [n?] 7 ‹available memory from cgroups (8.0 EiB) is unsupported, using system memory 31 GiB instead:›
I260425 04:02:35.734672 1 server/config.go:487 ⋮ [n?] 8 system total memory: 31 GiB
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 server configuration:
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹max offset 500000000›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹cache size 128 MiB›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹SQL memory pool size 256 MiB›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹scan interval 10m0s›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹scan min idle time 10ms›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹scan max idle time 1s›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹event log enabled true›
I260425 04:02:35.734679 1 server/config.go:489 ⋮ [n?] 9 +‹span configs enabled true›
I260425 04:02:35.734724 1 1@cli/start.go:1084 ⋮ [n?] 10 using local environment variables:
I260425 04:02:35.734724 1 1@cli/start.go:1084 ⋮ [n?] 10 +GOTRACEBACK=crash
I260425 04:02:35.734724 1 1@cli/start.go:1084 ⋮ [n?] 10 +LANG=‹en_US.UTF-8›
I260425 04:02:35.734724 1 1@cli/start.go:1084 ⋮ [n?] 10 +LC_ALL=‹en_US.UTF-8›
I260425 04:02:35.734724 1 1@cli/start.go:1084 ⋮ [n?] 10 +TZ=‹UTC›
I260425 04:02:35.734730 1 1@cli/start.go:1091 ⋮ [n?] 11 process identity: ‹uid 12345 euid 12345 gid 12345 egid 12345›
W260425 04:02:35.734844 1 1@cli/start.go:1304 ⋮ [n?] 12 could not initialize GEOS - spatial functions may not be available: geos: error during GEOS init: geos: cannot load GEOS from dir ‹"/usr/local/lib/cockroach"›: ‹geos error: /usr/local/lib/cockroach/libgeos.so: cannot open shared object file: No such file or directory›
I260425 04:02:35.734873 1 1@cli/start.go:581 ⋮ [n?] 13 ‹starting cockroach node›
I260425 04:02:35.871985 83 server/config.go:665 ⋮ [n?] 14 1 storage engine‹› initialized
I260425 04:02:35.872006 83 server/config.go:668 ⋮ [n?] 15 Pebble cache size: 128 MiB
I260425 04:02:35.872135 83 server/config.go:668 ⋮ [n?] 16 store 0: max size 0 B, max open file limit 519288
I260425 04:02:35.872142 83 server/config.go:668 ⋮ [n?] 17 store 0: {Encrypted:false ReadOnly:false FileStoreProperties:{path=‹/var/tmp/omicron_tmp/.tmph1pXNC/data›, fs=nsfs, blkdev=‹nsfs›, mnt=‹/run/snapd/ns/lxd.mnt› opts=‹rw›}}
I260425 04:02:35.873383 83 server/tracedumper/tracedumper.go:120 ⋮ [n?] 18 writing job trace dumps to ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/logs/inflight_trace_dump›
I260425 04:02:35.913096 83 1@server/clock_monotonicity.go:65 ⋮ [n?] 19 ‹monitoring forward clock jumps based on server.clock.forward_jump_check_enabled›
I260425 04:02:35.918591 196 1@server/server.go:1294 ⋮ [n1] 20 connecting to gossip network to verify cluster ID ‹"8fc6d95a-30a9-435a-adb5-a4d4486cd88f"›
I260425 04:02:35.918618 83 1@server/clock_monotonicity.go:140 ⋮ [n1] 21 Sleeping till wall time 1777089755918616245 to catches up to 1777089756413090093 to ensure monotonicity. Delta: 494.473848ms
I260425 04:02:36.413830 83 1@cli/start.go:521 ⋮ [n1] 22 listening URL file: ‹/var/tmp/omicron_tmp/.tmph1pXNC/listen-url›
W260425 04:02:36.428198 83 1@gossip/gossip.go:1531 ⋮ [n1] 23 no addresses found; use --join to specify a connected node
I260425 04:02:36.428235 83 gossip/gossip.go:401 ⋮ [n1] 24 NodeDescriptor set to ‹node_id:1 address: attrs:<> locality:<> ServerVersion: build_tag:"v22.1.22-64-g86fdbfca06" started_at:1777089756428230368 cluster_name:"" sql_address: http_address:›
I260425 04:02:36.430778 83 server/node.go:470 ⋮ [n1] 25 initialized store s1
I260425 04:02:36.430798 83 kv/kvserver/stores.go:250 ⋮ [n1] 26 read 0 node addresses from persistent storage
I260425 04:02:36.430894 196 1@server/server.go:1297 ⋮ [n1] 27 node connected via gossip
I260425 04:02:36.435972 83 server/node.go:549 ⋮ [n1] 28 started with engine type ‹2›
I260425 04:02:36.435991 83 server/node.go:551 ⋮ [n1] 29 started with attributes []
I260425 04:02:36.436017 83 server/goroutinedumper/goroutinedumper.go:122 ⋮ [n1] 30 writing goroutine dumps to ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/logs/goroutine_dump›
I260425 04:02:36.436037 83 server/heapprofiler/heapprofiler.go:49 ⋮ [n1] 31 writing go heap profiles to ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/logs/heap_profiler› at least every 1h0m0s
I260425 04:02:36.436046 83 server/heapprofiler/cgoprofiler.go:53 ⋮ [n1] 32 to enable jemalloc profiling: "export MALLOC_CONF=prof:true" or "ln -s prof:true /etc/malloc.conf"
I260425 04:02:36.436053 83 server/heapprofiler/statsprofiler.go:54 ⋮ [n1] 33 writing memory stats to ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/logs/heap_profiler› at least every 1h0m0s
I260425 04:02:36.436173 83 server/heapprofiler/activequeryprofiler.go:85 ⋮ [n1] 34 writing go query profiles to ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/logs/heap_profiler›
I260425 04:02:36.436207 83 1@server/server.go:1423 ⋮ [n1] 35 starting http server at ‹[::1]:42453› (use: ‹[::1]:42453›)
I260425 04:02:36.436218 83 1@server/server.go:1430 ⋮ [n1] 36 starting grpc/postgres server at ‹[::1]:40913›
I260425 04:02:36.436226 83 1@server/server.go:1431 ⋮ [n1] 37 advertising CockroachDB node at ‹[::1]:40913›
I260425 04:02:36.513818 364 kv/kvserver/liveness/liveness.go:775 ⋮ [n1,liveness-hb] 38 heartbeat failed on epoch increment; retrying
E260425 04:02:36.513919 394 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r3/1:‹/System/{NodeLive…-tsd}›] 39 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.513984 388 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r1/1:‹/{Min-System/NodeL…}›] 40 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.514008 388 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r2/1:‹/System/NodeLiveness{-Max}›] 41 1 matching stores are currently throttled: [‹s1: draining›]
I260425 04:02:36.514016 388 kv/kvserver/queue.go:1206 ⋮ [n1,replicate] 42 purgatory is now empty
I260425 04:02:36.522981 419 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 43 spanconfig-subscriber: established range feed cache
E260425 04:02:36.526609 409 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r26/1:‹/NamespaceTable/{30-Max}›] 44 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.526655 415 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r6/1:‹/Table/{SystemCon…-11}›] 45 1 matching stores are currently throttled: [‹s1: draining›]
I260425 04:02:36.532629 83 1@util/log/event_log.go:32 ⋮ [n1] 46 ={"Timestamp":1777089756532627616,"EventType":"node_restart","NodeID":1,"StartedAt":1777089756428230368,"LastUp":1777088814297190870}
I260425 04:02:36.532664 83 sql/sqlliveness/slinstance/slinstance.go:313 ⋮ [n1] 47 starting SQL liveness instance
E260425 04:02:36.539868 434 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r43/1:‹/Table/{47-50}›] 48 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.542654 468 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r7/1:‹/Table/1{1-2}›] 49 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.546444 580 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r42/1:‹/Table/4{6-7}›] 50 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.546483 472 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r35/1:‹/Table/{39-40}›] 51 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.546510 473 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r54/1:‹/Table/1{19/10-42/12}›] 52 1 matching stores are currently throttled: [‹s1: draining›]
I260425 04:02:36.546549 446 sql/sqlstats/persistedsqlstats/provider.go:132 ⋮ [n1] 53 starting sql-stats-worker with initial delay: 11m10.221490394s
I260425 04:02:36.546597 444 sql/temporary_schema.go:514 ⋮ [n1] 54 running temporary object cleanup background job
I260425 04:02:36.546649 603 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 55 settings-watcher: established range feed cache
I260425 04:02:36.547016 444 sql/temporary_schema.go:559 ⋮ [n1] 56 found 0 temporary schemas
I260425 04:02:36.547028 444 sql/temporary_schema.go:562 ⋮ [n1] 57 early exiting temporary schema cleaner as no temporary schemas were found
I260425 04:02:36.547033 444 sql/temporary_schema.go:563 ⋮ [n1] 58 completed temporary object cleanup job
I260425 04:02:36.547038 444 sql/temporary_schema.go:646 ⋮ [n1] 59 temporary object cleaner next scheduled to run at 2026-04-25 04:32:36.546580499 +0000 UTC
I260425 04:02:36.547043 522 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 60 system-config-cache: established range feed cache
I260425 04:02:36.547041 83 server/server_sql.go:1287 ⋮ [n1] 61 done ensuring all necessary startup migrations have run
I260425 04:02:36.547067 610 kv/kvserver/replica_rangefeed.go:700 ⋮ [n1,rangefeed=‹lease›,s1,r6] 62 RangeFeed closed timestamp 1777088810.469353191,0 is behind by 15m46.077713309s
I260425 04:02:36.547111 524 jobs/job_scheduler.go:433 ⋮ [n1] 63 waiting 3m0s before scheduled jobs daemon start
I260425 04:02:36.547187 528 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 64 tenant-settings-watcher: established range feed cache
E260425 04:02:36.547597 629 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r44/1:‹/Table/{50-112/2}›] 65 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.551854 634 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r4/1:‹/System{/tsd-tse}›] 66 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.553083 527 server/auto_upgrade.go:66 ⋮ [n1] 67 failed attempt to upgrade cluster version, error: no live nodes found
I260425 04:02:36.565143 423 sql/sqlliveness/slstorage/slstorage.go:460 ⋮ [n1] 68 inserted sqlliveness session 25fb916eba9e44899f6422b59bd7a1a8
I260425 04:02:36.565183 423 sql/sqlliveness/slinstance/slinstance.go:198 ⋮ [n1] 69 created new SQL liveness session 25fb916eba9e44899f6422b59bd7a1a8
E260425 04:02:36.565609 683 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r56/1:‹/Table/{142/12-267/4}›] 70 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.565696 127 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r31/1:‹/Table/3{5-6}›] 71 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.565729 128 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r8/1:‹/Table/1{2-3}›] 72 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.565761 129 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r33/1:‹/Table/3{7-8}›] 73 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.565791 738 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r11/1:‹/Table/1{5-6}›] 74 1 matching stores are currently throttled: [‹s1: draining›]
E260425 04:02:36.565821 739 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r27/1:‹/{NamespaceTab…-Table/32}›] 75 1 matching stores are currently throttled: [‹s1: draining›]
I260425 04:02:36.567103 83 1@server/server_sql.go:1350 ⋮ [n1] 76 ‹serving sql connections›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 node startup completed:
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +CockroachDB node starting at 2026-04-25 04:02:36.567150518 +0000 UTC (took 0.8s)
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +build: OSS v22.1.22-64-g86fdbfca06 @ 2026/03/18 01:46:58 (go1.22.11)
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +webui: ‹http://[::1]:42453›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +sql: ‹postgresql://root@[::1]:40913/defaultdb?sslmode=disable›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +sql (JDBC): ‹jdbc:postgresql://[::1]:40913/defaultdb?sslmode=disable&user=root›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +RPC client flags: ‹cockroach --host=[::1]:40913 --insecure›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +logs: ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/logs›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +temp dir: ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/cockroach-temp9424404›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +external I/O path: ‹/var/tmp/omicron_tmp/.tmph1pXNC/data/extern›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +store[0]: ‹path=/var/tmp/omicron_tmp/.tmph1pXNC/data›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +storage engine: pebble
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +clusterID: ‹8fc6d95a-30a9-435a-adb5-a4d4486cd88f›
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +status: restarted pre-existing node
I260425 04:02:36.567237 83 1@cli/start.go:1027 ⋮ [n1] 77 +nodeID: 1
I260425 04:02:36.567461 698 sql/sqlliveness/slstorage/slstorage.go:342 ⋮ [n1,intExec=‹expire-sessions›] 78 deleted session 90dd382aa460474dad75a3074ca0e9fb which expired at 1777088852.100396203,0
I260425 04:02:36.570293 652 util/log/event_log.go:32 ⋮ [n1,client=[::1]:57132,user=root] 79 ={"Timestamp":1777089756568408861,"EventType":"set_cluster_setting","Statement":"SET CLUSTER SETTING \"kv.raft_log.disable_synchronization_unsafe\" = true","Tag":"SET CLUSTER SETTING","User":"root","SettingName":"kv.raft_log.disable_synchronization_unsafe","Value":"‹true›"}
I260425 04:02:37.556496 527 server/auto_upgrade.go:79 ⋮ [n1] 80 ‹no need to upgrade, cluster already at the newest version›
I260425 04:02:38.011746 231 1@gossip/gossip.go:1547 ⋮ [n1] 81 node has connected to cluster via gossip
I260425 04:02:38.015056 231 kv/kvserver/stores.go:269 ⋮ [n1] 82 wrote 0 node addresses to persistent storage
I260425 04:02:38.984704 814 util/log/event_log.go:32 ⋮ [n1,client=[::1]:57134,user=root] 83 ={"Timestamp":1777089758973357231,"EventType":"set_cluster_setting","Statement":"SET CLUSTER SETTING \"cluster.preserve_downgrade_option\" = CASE encode(digest(crdb_internal.active_version() || crdb_internal.node_executable_version(), ‹'sha1'›), ‹'hex'›) = $1 WHEN ‹true› THEN $2 ELSE ‹NULL› END","Tag":"SET CLUSTER SETTING","User":"root","PlaceholderValues":["‹'d4d87aa2ad877a4cc2fddd0573952362739110de'›","‹'22.1'›"],"SettingName":"cluster.preserve_downgrade_option","Value":"‹22.1›"}
I260425 04:02:49.842426 1 1@cli/start.go:749 ⋮ [n1] 84 received signal 'terminated'
I260425 04:02:49.842476 1 1@cli/start.go:846 ⋮ [n1] 85 ‹initiating graceful shutdown of server›
I260425 04:02:49.842513 3552 1@server/drain.go:342 ⋮ [n1,server drain process] 86 ‹waiting for health probes to notice that the node is not ready for new sql connections›
I260425 04:02:49.842522 3552 1@sql/pgwire/server.go:612 ⋮ [n1,server drain process] 87 ‹starting draining SQL connections›
I260425 04:02:51.297896 846 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57142,user=root] 88 ‹closing existing connection while server is draining›
I260425 04:02:51.528797 844 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57140,user=root] 89 ‹closing existing connection while server is draining›
I260425 04:02:51.538035 1541 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57152,user=root] 90 ‹closing existing connection while server is draining›
I260425 04:02:51.543236 1554 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57156,user=root] 91 ‹closing existing connection while server is draining›
I260425 04:02:51.544301 1556 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57158,user=root] 92 ‹closing existing connection while server is draining›
I260425 04:02:51.550532 673 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57162,user=root] 93 ‹closing existing connection while server is draining›
I260425 04:02:51.554651 1540 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57164,user=root] 94 ‹closing existing connection while server is draining›
I260425 04:02:51.604454 842 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57150,user=root] 95 ‹closing existing connection while server is draining›
I260425 04:02:51.613811 850 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57148,user=root] 96 ‹closing existing connection while server is draining›
I260425 04:02:51.638251 848 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57144,user=root] 97 ‹closing existing connection while server is draining›
I260425 04:02:51.638306 80 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57146,user=root] 98 ‹closing existing connection while server is draining›
I260425 04:02:51.646418 1539 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57166,user=root] 99 ‹closing existing connection while server is draining›
I260425 04:02:51.658538 1538 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57154,user=root] 100 ‹closing existing connection while server is draining›
I260425 04:02:51.673900 834 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57138,user=root] 101 ‹closing existing connection while server is draining›
I260425 04:02:51.734484 813 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57134,user=root] 102 ‹closing existing connection while server is draining›
I260425 04:02:51.736631 816 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:57136,user=root] 103 ‹closing existing connection while server is draining›
I260425 04:02:51.736707 3552 sql/sqlstats/persistedsqlstats/flush.go:68 ⋮ [n1,server drain process] 104 flushing 778 stmt/txn fingerprints (286720 bytes) after 1.24µs
I260425 04:02:52.475316 3552 1@server/drain.go:285 ⋮ [n1,server drain process] 105 drain remaining: 322
I260425 04:02:52.475354 3552 1@server/drain.go:287 ⋮ [n1,server drain process] 106 drain details: SQL clients: 16, descriptor leases: 305, liveness record: 1
I260425 04:02:52.676011 3552 sql/sqlstats/persistedsqlstats/flush.go:68 ⋮ [n1,server drain process] 107 flushing 2 stmt/txn fingerprints (10240 bytes) after 650ns
W260425 04:02:52.676750 3552 sql/sqlstats/persistedsqlstats/flush.go:170 ⋮ [n1,server drain process] 108 ‹failed to flush statement statistics›: flush statement ‹4025915790256006928›'s statistics: ‹insert-stmt-stats›: cannot acquire lease when draining
W260425 04:02:52.676941 3552 sql/sqlstats/persistedsqlstats/flush.go:170 ⋮ [n1,server drain process] 109 ‹failed to flush transaction statistics›: flushing transaction ‹11006041404254834895›'s statistics: ‹insert-txn-stats›: cannot acquire lease when draining
I260425 04:02:52.683644 3552 1@server/drain.go:285 ⋮ [n1,server drain process] 110 drain remaining: 0
I260425 04:02:52.683717 594 sql/stats/automatic_stats.go:489 ⋮ [n1] 111 ‹quiescing auto stats refresher›
W260425 04:02:52.683753 599 jobs/registry.go:815 ⋮ [n1] 112 canceling all adopted jobs due to stopper quiescing
W260425 04:02:52.683858 423 sql/sqlliveness/slinstance/slinstance.go:235 ⋮ [n1] 113 ‹exiting heartbeat loop›
I260425 04:02:52.695989 1 1@cli/start.go:898 ⋮ [n1] 114 server drained and shutdown completed