I260130 18:27:09.712921 1 util/log/file_sync_buffer.go:238 ⋮ [config] file created at: 2026/01/30 18:27:09
I260130 18:27:09.712937 1 util/log/file_sync_buffer.go:238 ⋮ [config] running on machine: ‹bmat-EVT22200007-0000dbc8›
I260130 18:27:09.712946 1 util/log/file_sync_buffer.go:238 ⋮ [config] binary: CockroachDB OSS v22.1.22-46-g367bca413b (x86_64-pc-solaris2.11, built 2025/10/08 03:53:50, go1.22.11)
I260130 18:27:09.712953 1 util/log/file_sync_buffer.go:238 ⋮ [config] arguments: [‹cockroach› ‹start-single-node› ‹--insecure› ‹--http-addr=:0› ‹--store=path=/var/tmp/omicron_tmp/.tmp8HKhT0/data,ballast-size=0› ‹--listen-addr› ‹[::1]:0› ‹--listening-url-file› ‹/var/tmp/omicron_tmp/.tmp8HKhT0/listen-url›]
I260130 18:27:09.712976 1 util/log/file_sync_buffer.go:238 ⋮ [config] log format (utf8=✓): crdb-v2
I260130 18:27:09.712981 1 util/log/file_sync_buffer.go:238 ⋮ [config] line format: [IWEF]yymmdd hh:mm:ss.uuuuuu goid [chan@]file:line redactionmark \[tags\] [counter] msg
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 ALL SECURITY CONTROLS HAVE BEEN DISABLED!
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +This mode is intended for non-production testing only.
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +In this mode:
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +- Your cluster is open to any client that can access ‹::1›.
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +- Intruders with access to your machine or network can observe client-server traffic.
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +- Intruders can log in without password and read or write any data in the cluster.
W260130 18:27:09.712328 1 1@cli/start.go:1186 ⋮ [n?] 1 +- Intruders can consume all your server's resources and cause unavailability.
I260130 18:27:09.713254 1 1@cli/start.go:1196 ⋮ [n?] 2 To start a secure server without mandating TLS for clients,
I260130 18:27:09.713254 1 1@cli/start.go:1196 ⋮ [n?] 2 +consider --accept-sql-without-tls instead. For other options, see:
I260130 18:27:09.713254 1 1@cli/start.go:1196 ⋮ [n?] 2 +
I260130 18:27:09.713254 1 1@cli/start.go:1196 ⋮ [n?] 2 +- ‹https://go.crdb.dev/issue-v/53404/v22.1›
I260130 18:27:09.713254 1 1@cli/start.go:1196 ⋮ [n?] 2 +- https://www.cockroachlabs.com/docs/v22.1/secure-a-cluster.html
W260130 18:27:09.713367 1 1@cli/start.go:1112 ⋮ [n?] 3 ‹Using the default setting for --cache (128 MiB).›
W260130 18:27:09.713367 1 1@cli/start.go:1112 ⋮ [n?] 3 +‹ A significantly larger value is usually needed for good performance.›
W260130 18:27:09.713367 1 1@cli/start.go:1112 ⋮ [n?] 3 +‹ If you have a dedicated server a reasonable setting is 25% of physical memory.›
I260130 18:27:09.713432 1 1@cli/start.go:1225 ⋮ [n?] 4 ‹CockroachDB OSS v22.1.22-46-g367bca413b (x86_64-pc-solaris2.11, built 2025/10/08 03:53:50, go1.22.11)›
I260130 18:27:09.714203 1 server/config.go:485 ⋮ [n?] 5 unable to retrieve system total memory: ‹not implemented on illumos›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 server configuration:
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹max offset 500000000›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹cache size 128 MiB›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹SQL memory pool size 128 MiB›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹scan interval 10m0s›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹scan min idle time 10ms›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹scan max idle time 1s›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹event log enabled true›
I260130 18:27:09.714322 1 server/config.go:489 ⋮ [n?] 6 +‹span configs enabled true›
I260130 18:27:09.714442 1 1@cli/start.go:1090 ⋮ [n?] 7 using local environment variables:
I260130 18:27:09.714442 1 1@cli/start.go:1090 ⋮ [n?] 7 +GOTRACEBACK=crash
I260130 18:27:09.714442 1 1@cli/start.go:1090 ⋮ [n?] 7 +LANG=‹en_US.UTF-8›
I260130 18:27:09.714442 1 1@cli/start.go:1090 ⋮ [n?] 7 +LC_ALL=‹en_US.UTF-8›
I260130 18:27:09.714442 1 1@cli/start.go:1090 ⋮ [n?] 7 +TZ=‹UTC›
I260130 18:27:09.714510 1 1@cli/start.go:1097 ⋮ [n?] 8 process identity: ‹uid 12345 euid 12345 gid 12345 egid 12345›
W260130 18:27:09.714949 1 1@cli/start.go:1310 ⋮ [n?] 9 could not initialize GEOS - spatial functions may not be available: geos: error during GEOS init: geos: cannot load GEOS from dir ‹"/usr/local/lib/cockroach"›: ‹geos error: ld.so.1: cockroach: fatal: /usr/local/lib/cockroach/libgeos.so: open failed: No such file or directory›
I260130 18:27:09.715022 1 1@cli/start.go:581 ⋮ [n?] 10 starting cockroach node
I260130 18:27:09.962286 8 server/config.go:665 ⋮ [n?] 11 1 storage engine‹› initialized
I260130 18:27:09.962355 8 server/config.go:668 ⋮ [n?] 12 Pebble cache size: 128 MiB
I260130 18:27:09.962371 8 server/config.go:668 ⋮ [n?] 13 store 0: max size 0 B, max open file limit 60536
I260130 18:27:09.962379 8 server/config.go:668 ⋮ [n?] 14 store 0: {Encrypted:false ReadOnly:false FileStoreProperties:{path=‹/var/tmp/omicron_tmp/.tmp8HKhT0/data›, fs=zfs, blkdev=‹rpool/work›, mnt=‹/work› opts=‹rw,devices,setuid,nonbmand,exec,xattr,atime,dev=4350006›}}
E260130 18:27:09.962838 8 1@server/status/runtime.go:343 ⋮ [n?] 15 could not get initial disk IO counters: ‹not implemented yet›
E260130 18:27:09.962942 8 1@server/status/runtime.go:347 ⋮ [n?] 16 could not get initial disk IO counters: ‹not implemented yet›
I260130 18:27:09.965665 8 server/tracedumper/tracedumper.go:120 ⋮ [n?] 17 writing job trace dumps to ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/logs/inflight_trace_dump›
I260130 18:27:10.084305 8 1@server/clock_monotonicity.go:65 ⋮ [n?] 18 monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I260130 18:27:10.087399 8 1@server/clock_monotonicity.go:140 ⋮ [n1] 19 Sleeping till wall time 1769797630087394815 to catches up to 1769797630584260623 to ensure monotonicity. Delta: 496.865808ms
I260130 18:27:10.087524 212 1@server/server.go:1319 ⋮ [n1] 20 connecting to gossip network to verify cluster ID ‹"90077c31-b16b-4835-acc3-78b1bc6a9a35"›
I260130 18:27:10.584922 8 1@cli/start.go:521 ⋮ [n1] 21 listening URL file: ‹/var/tmp/omicron_tmp/.tmp8HKhT0/listen-url›
W260130 18:27:10.585437 8 1@gossip/gossip.go:1531 ⋮ [n1] 22 no addresses found; use --join to specify a connected node
I260130 18:27:10.585542 8 gossip/gossip.go:401 ⋮ [n1] 23 NodeDescriptor set to ‹node_id:1 address: attrs:<> locality:<> ServerVersion: build_tag:"v22.1.22-46-g367bca413b" started_at:1769797630585528721 cluster_name:"" sql_address: http_address:›
I260130 18:27:10.613663 8 server/node.go:470 ⋮ [n1] 24 initialized store s1
I260130 18:27:10.613823 8 kv/kvserver/stores.go:250 ⋮ [n1] 25 read 0 node addresses from persistent storage
I260130 18:27:10.614117 8 server/node.go:549 ⋮ [n1] 26 started with engine type ‹2›
I260130 18:27:10.614149 8 server/node.go:551 ⋮ [n1] 27 started with attributes []
I260130 18:27:10.614318 8 server/goroutinedumper/goroutinedumper.go:122 ⋮ [n1] 28 writing goroutine dumps to ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/logs/goroutine_dump›
I260130 18:27:10.614556 8 server/heapprofiler/heapprofiler.go:49 ⋮ [n1] 29 writing go heap profiles to ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/logs/heap_profiler› at least every 1h0m0s
I260130 18:27:10.614578 8 server/heapprofiler/cgoprofiler.go:53 ⋮ [n1] 30 to enable jmalloc profiling: "export MALLOC_CONF=prof:true" or "ln -s prof:true /etc/malloc.conf"
I260130 18:27:10.614591 8 server/heapprofiler/statsprofiler.go:54 ⋮ [n1] 31 writing memory stats to ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/logs/heap_profiler› at last every 1h0m0s
I260130 18:27:10.614659 212 1@server/server.go:1322 ⋮ [n1] 32 node connected via gossip
W260130 18:27:10.614659 8 server/env_sampler.go:121 ⋮ [n1] 33 failed to start query profiler worker: failed to read ‹memory› cgroup from cgroups file: ‹/proc/self/cgroup›: open ‹/proc/self/cgroup›: no such file or directory
I260130 18:27:10.614951 8 1@server/server.go:1458 ⋮ [n1] 34 starting http server at ‹[::1]:62385› (use: ‹[::1]:62385›)
I260130 18:27:10.614990 8 1@server/server.go:1465 ⋮ [n1] 35 starting grpc/postgres server at ‹[::1]:42257›
I260130 18:27:10.615013 8 1@server/server.go:1466 ⋮ [n1] 36 advertising CockroachDB node at ‹[::1]:42257›
E260130 18:27:10.615083 8 server/status/recorder.go:420 ⋮ [n1,summaries] 37 could not get total system memory: ‹not implemented on illumos›
E260130 18:27:10.675881 157 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r26/1:‹/NamespaceTable/{30-Max}›] 38 1 matching stores are currently throttled: [‹s1: draining›]
E260130 18:27:10.676619 161 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r3/1:‹/System/{NodeLive…-tsd}›] 39 1 matching stores are currently throttled: [‹s1: draining›]
E260130 18:27:10.676782 324 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r2/1:‹/System/NodeLiveness{-Max}›] 40 1 matching stores are currently throttled: [‹s1: draining›]
E260130 18:27:10.676842 324 kv/kvserver/queue.go:1106 ⋮ [n1,replicate,s1,r1/1:‹/{Min-System/NodeL…}›] 41 1 matching stores are currently throttled: [‹s1: draining›]
I260130 18:27:10.676854 324 kv/kvserver/queue.go:1206 ⋮ [n1,replicate] 42 purgatory is now empty
I260130 18:27:10.678971 359 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 43 spanconfig-subscriber: established range feed cache
I260130 18:27:10.708623 8 1@util/log/event_log.go:32 ⋮ [n1] 44 ={"Timestamp":1769797630708608478,"EventType":"node_restart","NodeID":1,"StartedAt":1769797630585528721,"LastUp":1769790595848293505}
I260130 18:27:10.708771 8 sql/sqlliveness/slinstance/slinstance.go:313 ⋮ [n1] 45 starting SQL liveness instance
I260130 18:27:10.712867 861 sql/temporary_schema.go:514 ⋮ [n1] 46 running temporary object cleanup background job
I260130 18:27:10.712932 863 sql/sqlstats/persistedsqlstats/provider.go:132 ⋮ [n1] 47 starting sql-stats-worker with initial delay: 11m13.219006488s
I260130 18:27:10.713229 853 sql/sqlliveness/slstorage/slstorage.go:460 ⋮ [n1] 48 inserted sqlliveness session 7c7be19d2cfc4f0bad66e6bf895393fa
I260130 18:27:10.713294 853 sql/sqlliveness/slinstance/slinstance.go:198 ⋮ [n1] 49 created new SQL liveness session 7c7be19d2cfc4f0bad66e6bf895393fa
I260130 18:27:10.714099 938 kv/kvserver/replica_rangefeed.go:700 ⋮ [n1,rangefeed=‹lease›,s1,r6] 50 RangeFeed closed timestamp 1769790591.104202286,0 is behind by 1h57m19.609889385s
I260130 18:27:10.714824 861 sql/temporary_schema.go:559 ⋮ [n1] 51 found 0 temporary schemas
I260130 18:27:10.714884 861 sql/temporary_schema.go:562 ⋮ [n1] 52 early exiting temporary schema cleaner as no temporary schemas were found
I260130 18:27:10.714899 861 sql/temporary_schema.go:563 ⋮ [n1] 53 completed temporary object cleanup job
I260130 18:27:10.714910 949 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 54 settings-watcher: established range feed cache
I260130 18:27:10.714910 861 sql/temporary_schema.go:646 ⋮ [n1] 55 temporary object cleaner next scheduled to run at 2026-01-30 18:57:10.712728608 +0000 UTC
I260130 18:27:10.716218 997 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 56 system-config-cache: established range feed cache
I260130 18:27:10.716311 8 server/server_sql.go:1295 ⋮ [n1] 57 done ensuring all necessary startup migrations have run
I260130 18:27:10.717172 999 kv/kvclient/rangefeed/rangefeedcache/watcher.go:316 ⋮ [n1] 58 tenant-settings-watcher: established range feed cache
I260130 18:27:10.717297 920 jobs/job_scheduler.go:433 ⋮ [n1] 59 waiting 4m0s before scheduled jobs daemon start
I260130 18:27:10.725700 923 server/auto_upgrade.go:79 ⋮ [n1] 60 no need to upgrade, cluster already at the newest version
I260130 18:27:10.726207 1012 sql/sqlliveness/slstorage/slstorage.go:342 ⋮ [n1,intExec=‹expire-sessions›] 61 deleted session 61e52994d9f64e30ac43b8718075f044 which expired at 1769790634.905595565,0
I260130 18:27:10.796962 8 1@server/server_sql.go:1365 ⋮ [n1] 62 serving sql connections
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 node startup completed:
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +CockroachDB node starting at 2026-01-30 18:27:10.797038914 +0000 UTC (took 1.1s)
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +build: OSS v22.1.22-46-g367bca413b @ 2025/10/08 03:53:50 (go1.22.11)
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +webui: ‹http://[::1]:62385›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +sql: ‹postgresql://root@[::1]:42257/defaultdb?sslmode=disable›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +sql (JDBC): ‹jdbc:postgresql://[::1]:42257/defaultdb?sslmode=disable&user=root›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +RPC client flags: ‹cockroach --host=[::1]:42257 --insecure›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +logs: ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/logs›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +temp dir: ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/cockroach-temp1980340314›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +external I/O path: ‹/var/tmp/omicron_tmp/.tmp8HKhT0/data/extern›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +store[0]: ‹path=/var/tmp/omicron_tmp/.tmp8HKhT0/data›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +storage engine: pebble
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +clusterID: ‹90077c31-b16b-4835-acc3-78b1bc6a9a35›
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +status: restarted pre-existing node
I260130 18:27:10.797194 8 1@cli/start.go:1033 ⋮ [n1] 63 +nodeID: 1
I260130 18:27:10.884218 1170 util/log/event_log.go:32 ⋮ [n1,client=[::1]:55071,user=root] 64 ={"Timestamp":1769797630853262556,"EventType":"set_cluster_setting","Statement":"SET CLUSTER SETTING \"kv.raft_log.disable_synchronization_unsafe\" = true","Tag":"SET CLUSTER SETTING","User":"root","SettingName":"kv.raft_log.disable_synchronization_unsafe","Value":"‹true›"}
I260130 18:27:12.546650 1087 server/diagnostics/update_checker.go:150 ⋮ [n1] 65 a new version is available: ‹26.1.0-rc.1›, details: ‹https://www.cockroachlabs.com/docs/releases/v26.1#v26-1-0-rc-1›
I260130 18:27:12.994134 88 1@gossip/gossip.go:1547 ⋮ [n1] 66 node has connected to cluster via gossip
I260130 18:27:12.994813 88 kv/kvserver/stores.go:269 ⋮ [n1] 67 wrote 0 node addresses to persistent storage
E260130 18:27:14.170938 1088 server/status/recorder.go:420 ⋮ [n1] 68 could not get total system memory: ‹not implemented on illumos›
E260130 18:27:20.658337 218 1@server/status/runtime.go:454 ⋮ [n1] 69 unable to get mem usage: ‹not implemented on illumos›
E260130 18:27:20.658633 218 1@server/status/runtime.go:458 ⋮ [n1] 70 unable to get cpu usage: ‹not implemented on illumos›
W260130 18:27:20.673125 218 1@server/status/runtime.go:468 ⋮ [n1] 71 unable to get file descriptor usage (will not try again): ‹not implemented on illumos›
W260130 18:27:20.673330 218 1@server/status/runtime.go:478 ⋮ [n1] 72 problem fetching disk stats: ‹not implemented yet›; disk stats will be empty.
W260130 18:27:20.673390 218 1@server/status/runtime.go:499 ⋮ [n1] 73 problem fetching net stats: ‹not implemented yet›; net stats will be empty.
E260130 18:27:20.696594 357 server/status/recorder.go:420 ⋮ [n1,summaries] 74 could not get total system memory: ‹not implemented on illumos›
I260130 18:27:26.052219 1168 util/log/event_log.go:32 ⋮ [n1,client=[::1]:40452,user=root] 75 ={"Timestamp":1769797646046857811,"EventType":"set_cluster_setting","Statement":"SET CLUSTER SETTING \"cluster.preserve_downgrade_option\" = CASE encode(digest(crdb_internal.active_version() || crdb_internal.node_executable_version(), ‹'sha1'›), ‹'hex'›) = $1 WHEN ‹true› THEN $2 ELSE ‹NULL› END","Tag":"SET CLUSTER SETTING","User":"root","PlaceholderValues":["‹'d4d87aa2ad877a4cc2fddd0573952362739110de'›","‹'22.1'›"],"SettingName":"cluster.preserve_downgrade_option","Value":"‹22.1›"}
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 +__level_____count____size___score______in__ingest(sz_cnt)____move(sz_cnt)___write(sz_cnt)____read___r-amp___w-amp
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + WAL 2 1.7 M - 1.6 M - - - - 1.7 M - - - 1.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 0 0 0 B 0.00 0 B 0 B 0 0 B 0 0 B 0 0 B 0 0.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 1 0 0 B 0.00 0 B 0 B 0 0 B 0 0 B 0 0 B 0 0.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 2 0 0 B 0.00 0 B 0 B 0 0 B 0 0 B 0 0 B 0 0.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 3 0 0 B 0.00 0 B 0 B 0 0 B 0 0 B 0 0 B 0 0.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 4 0 0 B 0.00 0 B 0 B 0 0 B 0 0 B 0 0 B 0 0.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 5 0 0 B 0.00 0 B 0 B 0 0 B 0 0 B 0 0 B 0 0.0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + 6 1 924 K - 1.5 M 0 B 0 0 B 0 924 K 1 1.5 M 1 0.6
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + total 1 924 K - 1.7 M 0 B 0 0 B 0 2.6 M 1 1.5 M 1 1.5
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + flush 0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 +compact 1 0 B 1.5 M 0 (size == estimated-debt, score = in-progress-bytes, in = num-in-progress)
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + ctype 1 0 0 0 0 0 (default, delete, elision, move, read, rewrite)
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + memtbl 2 64 M
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 +zmemtbl 0 0 B
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + ztbl 0 0 B
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + bcache 54 1.8 M 98.9% (score == hit-rate)
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + tcache 1 680 B 99.9% (score == hit-rate)
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + snaps 0 - 0 (score == earliest seq num)
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + titers 0
I260130 18:27:30.623543 213 kv/kvserver/store.go:3290 ⋮ [n1,s1] 76 + filter - - 0.0% (score == utility)
E260130 18:27:30.645757 218 1@server/status/runtime.go:454 ⋮ [n1] 77 unable to get mem usage: ‹not implemented on illumos›
E260130 18:27:30.645890 218 1@server/status/runtime.go:458 ⋮ [n1] 78 unable to get cpu usage: ‹not implemented on illumos›
W260130 18:27:30.645970 218 1@server/status/runtime.go:478 ⋮ [n1] 79 problem fetching disk stats: ‹not implemented yet›; disk stats will be empty.
W260130 18:27:30.645994 218 1@server/status/runtime.go:499 ⋮ [n1] 80 problem fetching net stats: ‹not implemented yet›; net stats will be empty.
E260130 18:27:30.678929 357 server/status/recorder.go:420 ⋮ [n1,summaries] 81 could not get total system memory: ‹not implemented on illumos›
I260130 18:27:35.633810 1 1@cli/start.go:755 ⋮ [n1] 82 received signal 'terminated'
I260130 18:27:35.633936 1 1@cli/start.go:852 ⋮ [n1] 83 initiating graceful shutdown of server
I260130 18:27:35.634117 4264 1@server/drain.go:342 ⋮ [n1,server drain process] 84 waiting for health probes to notice that the node is not ready for new sql connections
I260130 18:27:35.636380 4264 1@sql/pgwire/server.go:612 ⋮ [n1,server drain process] 85 starting draining SQL connections
I260130 18:27:35.707791 2176 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:64877,user=root] 86 closing existing connection while server is draining
I260130 18:27:35.708017 2173 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:40950,user=root] 87 closing existing connection while server is draining
I260130 18:27:35.708134 2172 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:45018,user=root] 88 closing existing connection while server is draining
I260130 18:27:35.708258 1236 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:39608,user=root] 89 closing existing connection while server is draining
I260130 18:27:35.708888 2200 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:53716,user=root] 90 closing existing connection while server is draining
I260130 18:27:35.713150 2158 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:46394,user=root] 91 closing existing connection while server is draining
I260130 18:27:35.713013 2210 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:39555,user=root] 92 closing existing connection while server is draining
I260130 18:27:35.714053 1104 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:49503,user=root] 93 closing existing connection while server is draining
I260130 18:27:35.714397 1167 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:40452,user=root] 94 closing existing connection while server is draining
I260130 18:27:35.714674 1101 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:34811,user=root] 95 closing existing connection while server is draining
I260130 18:27:35.714916 1073 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:46822,user=root] 96 closing existing connection while server is draining
I260130 18:27:35.715037 1169 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:43552,user=root] 97 closing existing connection while server is draining
I260130 18:27:35.715275 1316 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:64243,user=root] 98 closing existing connection while server is draining
I260130 18:27:35.715578 1072 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:44555,user=root] 99 closing existing connection while server is draining
I260130 18:27:35.716076 1258 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:41560,user=root] 100 closing existing connection while server is draining
I260130 18:27:35.716564 2144 1@sql/pgwire/conn.go:601 ⋮ [n1,client=[::1]:63486,user=root] 101 closing existing connection while server is draining
I260130 18:27:35.716906 4264 sql/sqlstats/persistedsqlstats/flush.go:68 ⋮ [n1,server drain process] 102 flushing 445 stmt/txn fingerprints (245760 bytes) after 1.934µs
I260130 18:27:37.389201 970 jobs/adopt.go:106 ⋮ [n1] 103 claimed 1 jobs
I260130 18:27:37.393250 5187 jobs/adopt.go:241 ⋮ [n1] 104 job 1145963118284046337: resuming execution
I260130 18:27:37.399091 5203 jobs/registry.go:1209 ⋮ [n1] 105 AUTO SPAN CONFIG RECONCILIATION job 1145963118284046337: stepping through state running with error:
I260130 18:27:37.935736 5203 spanconfig/spanconfigsqlwatcher/sqlwatcher.go:264 ⋮ [n1,job=1145963118284046337] 106 established range feed over system.descriptors starting at time 1769797657.419437913,0
I260130 18:27:37.935930 5203 spanconfig/spanconfigsqlwatcher/sqlwatcher.go:317 ⋮ [n1,job=1145963118284046337] 107 established range feed over system.zones starting at time 1769797657.419437913,0
I260130 18:27:37.936046 5203 spanconfig/spanconfigsqlwatcher/sqlwatcher.go:412 ⋮ [n1,job=1145963118284046337] 108 established range feed over system.protected_ts_records starting at time 1769797657.419437913,0
I260130 18:27:38.616949 4264 1@server/drain.go:285 ⋮ [n1,server drain process] 109 drain remaining: 262
I260130 18:27:38.617119 4264 1@server/drain.go:287 ⋮ [n1,server drain process] 110 drain details: SQL clients: 16, descriptor leases: 245, liveness record: 1
I260130 18:27:38.817328 4264 sql/sqlstats/persistedsqlstats/flush.go:68 ⋮ [n1,server drain process] 111 flushing 2 stmt/txn fingerprints (10240 bytes) after 2.656µs
W260130 18:27:38.820413 4264 sql/sqlstats/persistedsqlstats/flush.go:170 ⋮ [n1,server drain process] 112 ‹failed to flush statement statistics›: flush statement ‹4025915790256006928›'s statistics: ‹insert-stmt-stats›: cannot acquire lease when draining
W260130 18:27:38.821350 4264 sql/sqlstats/persistedsqlstats/flush.go:170 ⋮ [n1,server drain process] 113 ‹failed to flush transaction statistics›: flushing transaction ‹11006041404254834895›'s statistics: ‹insert-txn-stats›: cannot acquire lease when draining
I260130 18:27:38.824424 4264 1@server/drain.go:285 ⋮ [n1,server drain process] 114 drain remaining: 0
W260130 18:27:38.825308 853 sql/sqlliveness/slinstance/slinstance.go:235 ⋮ [n1] 115 exiting heartbeat loop
W260130 18:27:38.825742 968 jobs/registry.go:815 ⋮ [n1] 116 canceling all adopted jobs due to stopper quiescing
I260130 18:27:38.825748 963 sql/stats/automatic_stats.go:489 ⋮ [n1] 117 quiescing auto stats refresher
I260130 18:27:38.826578 5203 jobs/registry.go:1209 ⋮ [n1] 118 AUTO SPAN CONFIG RECONCILIATION job 1145963118284046337: stepping through state succeeded with error:
W260130 18:27:38.826809 5203 kv/txn.go:728 ⋮ [n1] 119 failure aborting transaction: ‹node unavailable; try another peer›; abort caused by: txn exec: context canceled
W260130 18:27:38.827995 5203 jobs/adopt.go:459 ⋮ [n1] 120 could not clear job claim: ‹clear-job-claim: failed to send RPC: sending to all replicas failed; last error: node unavailable; try another peer›
I260130 18:27:38.831793 1 1@cli/start.go:904 ⋮ [n1] 121 server drained and shutdown completed