Editing crash report #67399

Reason: ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed
Crashing Function: osp_sync_send_new_rpc
Where to cut Backtrace:
osp_sync_send_new_rpc
osp_sync_process_record
osp_sync_process_queues
llog_process_thread
llog_process_or_fork
llog_cat_process_cb
llog_process_thread
llog_process_or_fork
llog_cat_process_or_fork
llog_cat_process
osp_sync_thread
kthread
Reports Count: 14
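
All of these reports trip the same invariant in the OSP sync code: the number of sync RPCs currently in flight for a device must not exceed the configured maximum. The sketch below illustrates that style of check; the field and function names are taken from the assertion text above, but the body is a simplified userspace approximation, not the actual osp_sync.c implementation.

/* Simplified userspace sketch of the invariant behind the failed LASSERT();
 * not the actual Lustre osp_sync.c code. */
#include <assert.h>

struct osp_device_sketch {
        int opd_sync_rpcs_in_flight;     /* sync RPCs currently outstanding */
        int opd_sync_max_rpcs_in_flight; /* configured upper bound */
};

static void osp_sync_send_new_rpc_sketch(struct osp_device_sketch *d)
{
        /* The reports above show this check failing, i.e. the in-flight
         * counter had already been pushed past the configured maximum
         * by the time a new RPC was being sent. */
        assert(d->opd_sync_rpcs_in_flight <= d->opd_sync_max_rpcs_in_flight);

        /* ... build and send the RPC, then account for it ... */
        d->opd_sync_rpcs_in_flight++;
}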

Added fields:

Match messages in logs
(every line must be present in the log output, as sketched after this form;
copy from the "Messages before crash" column below):
Match messages in full crash
(every line must be present in the full crash output;
copy from the "Full Crash" column below):
Limit to a test:
(copy from the "Failing Test" column below):
Delete these reports as invalid (e.g. the real bug is already in review)
Bug or comment:
Extra info:
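
The two "Match messages" fields above act as a conjunctive filter: a report matches only if every non-empty line entered in the field occurs somewhere in the corresponding column of the report. A rough, hypothetical approximation of that rule (not the crash-report tool's actual code) is:

/* Hypothetical sketch of the "every line must be present" matching rule;
 * not the crash-report tool's actual implementation. */
#include <stdbool.h>
#include <string.h>

/* Return true only if every non-empty pattern line occurs in the log text. */
static bool all_lines_present(const char *const *pattern_lines, int nlines,
                              const char *log_text)
{
        for (int i = 0; i < nlines; i++) {
                if (pattern_lines[i][0] == '\0')
                        continue;              /* skip blank lines */
                if (strstr(log_text, pattern_lines[i]) == NULL)
                        return false;          /* one missing line rejects it */
        }
        return true;
}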

Failures list (last 100):

Failing Test | Full Crash | Messages before crash | Comment
sanity test 115: verify dynamic thread creation
LustreError: 21315:0:(osp_sync.c:644:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 21315:0:(osp_sync.c:644:osp_sync_send_new_rpc()) LBUG
Pid: 21315, comm: osp-syn-2-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0x1040 [osp]
[<0>] osp_sync_process_queues+0x4b4/0xde0 [osp]
[<0>] llog_process_thread+0x86e/0x1c00 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x580 [obdclass]
[<0>] llog_cat_process_cb+0x2d1/0x2e0 [obdclass]
[<0>] llog_process_thread+0x86e/0x1c00 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x580 [obdclass]
[<0>] llog_cat_process_or_fork+0x211/0x3b0 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x1a4/0xc40 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: Unmounted lustre-client
Lustre: Skipped 1 previous similar message
Lustre: server umount lustre-MDT0000 complete
LustreError: 18617:0:(ldlm_lockd.c:2527:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1746394410 with bad export cookie 13149049789622009287
LustreError: 166-1: MGC192.168.123.130@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
Lustre: 17535:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1746394409/real 1746394409] req@ffff88025dca0680 x1831223938264000/t0(0) o400->lustre-MDT0000-lwp-OST0002@0@lo:12/10 lens 224/224 e 0 to 1 dl 1746394416 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u32:1.0'
Lustre: lustre-MDT0000-lwp-OST0002: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: server umount lustre-OST0000 complete
Lustre: server umount lustre-OST0001 complete
Lustre: 17525:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1746394414/real 1746394414] req@ffff88025dca4b40 x1831223938264448/t0(0) o400->lustre-MDT0000-lwp-OST0003@0@lo:12/10 lens 224/224 e 0 to 1 dl 1746394421 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u32:0.0'
Lustre: 17525:0:(client.c:2295:ptlrpc_expire_one_request()) Skipped 3 previous similar messages
Lustre: server umount lustre-OST0002 complete
Lustre: 17525:0:(client.c:2295:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1746394415/real 1746394415] req@ffff88025dca5180 x1831223938264640/t0(0) o400->lustre-MDT0000-lwp-OST0003@0@lo:12/10 lens 224/224 e 0 to 1 dl 1746394422 ref 1 fl Rpc:XNQr/0/ffffffff rc 0/-1 job:'kworker/u32:0.0'
Lustre: server umount lustre-OST0003 complete
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
mount.lustre (21892) used greatest stack depth: 9888 bytes left
LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0000: deleting orphan objects from 0x0:23716 to 0x0:23745
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 2 previous similar messages
Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
Lustre: lustre-OST0001: deleting orphan objects from 0x0:24804 to 0x0:24833
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 1 previous similar message
Lustre: lustre-OST0002: Imperative Recovery not enabled, recovery window 60-180
Lustre: lustre-OST0002: deleting orphan objects from 0x0:24580 to 0x0:24609
LustreError: 137-5: lustre-OST0003_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 2 previous similar messages
Lustre: lustre-OST0003: deleting orphan objects from 0x0:18084 to 0x0:18113
Lustre: Mounted lustre-client
Lustre: Skipped 1 previous similar message
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 10475:0:(osp_sync.c:666:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 10475:0:(osp_sync.c:666:osp_sync_send_new_rpc()) LBUG
CPU: 3 PID: 10475 Comm: osp-syn-1-0 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa021da9d>] lbug_with_loc+0x4d/0xb0 [libcfs]
[<ffffffffa14b75e4>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<ffffffffa14bcc22>] osp_sync_process_record+0x3e2/0xf10 [osp]
[<ffffffffa14bd8dc>] osp_sync_process_queues+0x18c/0x28b0 [osp]
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffffa02204a4>] ? libcfs_debug_msg+0x6d4/0xc20 [libcfs]
[<ffffffffa036e6a4>] ? llog_process_thread+0x104/0x1c60 [obdclass]
[<ffffffffa036f236>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa036c469>] ? llog_handle_get+0x19/0x30 [obdclass]
[<ffffffffa14bd750>] ? osp_sync_process_record+0xf10/0xf10 [osp]
[<ffffffffa03702e8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffffa0375c21>] ? llog_cat_process_common+0x121/0x460 [obdclass]
[<ffffffffa0376ee9>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<ffffffffa036f236>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa0376bb0>] ? llog_cat_cancel_records+0x140/0x140 [obdclass]
[<ffffffffa03702e8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffffa0372ed9>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<ffffffff810d03f2>] ? default_wake_function+0x12/0x20
[<ffffffffa14bd750>] ? osp_sync_process_record+0xf10/0xf10 [osp]
[<ffffffffa037324e>] llog_cat_process+0x2e/0x30 [obdclass]
[<ffffffffa14ba20d>] osp_sync_thread+0x18d/0xec0 [osp]
[<ffffffff810c834d>] ? finish_task_switch+0x5d/0x1b0
[<ffffffff817e05ca>] ? __schedule+0x32a/0x7d0
[<ffffffffa14ba080>] ? osp_sync_process_committed+0xce0/0xce0 [osp]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Lustre: DEBUG MARKER: conf-sanity test_114: @@@@@@ FAIL: LBUG/LASSERT detected
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
Lustre: Modifying parameter general.debug_raw_pointers in log params
LustreError: lustre-OST0003: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3 to 0x280000400:97)
Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:36 to 0x240000400:129)
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 4959:0:(osp_sync.c:686:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 4959:0:(osp_sync.c:686:osp_sync_send_new_rpc()) LBUG
CPU: 5 PID: 4959 Comm: osp-syn-3-0 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa024ba9d>] lbug_with_loc+0x4d/0xb0 [libcfs]
[<ffffffffa147e5e4>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<ffffffffa1483c22>] osp_sync_process_record+0x3e2/0xf10 [osp]
[<ffffffffa14848dc>] osp_sync_process_queues+0x18c/0x28b0 [osp]
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffffa024e4a4>] ? libcfs_debug_msg+0x6d4/0xc20 [libcfs]
[<ffffffffa038d6a4>] ? llog_process_thread+0x104/0x1c60 [obdclass]
[<ffffffffa038e236>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa038b469>] ? llog_handle_get+0x19/0x30 [obdclass]
[<ffffffffa1484750>] ? osp_sync_process_record+0xf10/0xf10 [osp]
[<ffffffffa038f2e8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffffa0394c21>] ? llog_cat_process_common+0x121/0x460 [obdclass]
[<ffffffffa0395ee9>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<ffffffffa038e236>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa0395bb0>] ? llog_cat_cancel_records+0x140/0x140 [obdclass]
[<ffffffffa038f2e8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffffa0391ed9>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<ffffffff810d03f2>] ? default_wake_function+0x12/0x20
[<ffffffffa1484750>] ? osp_sync_process_record+0xf10/0xf10 [osp]
[<ffffffffa039224e>] llog_cat_process+0x2e/0x30 [obdclass]
[<ffffffffa148120d>] osp_sync_thread+0x18d/0xec0 [osp]
[<ffffffff810c834d>] ? finish_task_switch+0x5d/0x1b0
[<ffffffff817e05ca>] ? __schedule+0x32a/0x7d0
[<ffffffffa1481080>] ? osp_sync_process_committed+0xce0/0xce0 [osp]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Lustre: DEBUG MARKER: conf-sanity test_114: @@@@@@ FAIL: LBUG/LASSERT detected
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
Lustre: Modifying parameter general.debug_raw_pointers in log params
LustreError: lustre-OST0003: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 2 previous similar messages
Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:36 to 0x240000400:129)
Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3 to 0x280000400:97)
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 5022:0:(osp_sync.c:686:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 5022:0:(osp_sync.c:686:osp_sync_send_new_rpc()) LBUG
CPU: 5 PID: 5022 Comm: osp-syn-1-0 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa0214a9d>] lbug_with_loc+0x4d/0xb0 [libcfs]
[<ffffffffa14bf5e4>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<ffffffffa14c4c02>] osp_sync_process_record+0x3e2/0xf10 [osp]
[<ffffffffa14c58bc>] osp_sync_process_queues+0x18c/0x28b0 [osp]
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffffa02174a4>] ? libcfs_debug_msg+0x6d4/0xc20 [libcfs]
[<ffffffffa04126a4>] ? llog_process_thread+0x104/0x1c60 [obdclass]
[<ffffffffa0413236>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa0410469>] ? llog_handle_get+0x19/0x30 [obdclass]
[<ffffffffa14c5730>] ? osp_sync_process_record+0xf10/0xf10 [osp]
[<ffffffffa04142e8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffffa0419c21>] ? llog_cat_process_common+0x121/0x460 [obdclass]
[<ffffffffa041aee9>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<ffffffffa0413236>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa041abb0>] ? llog_cat_cancel_records+0x140/0x140 [obdclass]
[<ffffffffa04142e8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffffa0416ed9>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<ffffffff810d03f2>] ? default_wake_function+0x12/0x20
[<ffffffffa14c5730>] ? osp_sync_process_record+0xf10/0xf10 [osp]
[<ffffffffa041724e>] llog_cat_process+0x2e/0x30 [obdclass]
[<ffffffffa14c220d>] osp_sync_thread+0x18d/0xec0 [osp]
[<ffffffff810c834d>] ? finish_task_switch+0x5d/0x1b0
[<ffffffff817e05ca>] ? __schedule+0x32a/0x7d0
[<ffffffffa14c2080>] ? osp_sync_process_committed+0xce0/0xce0 [osp]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Lustre: DEBUG MARKER: conf-sanity test_114: @@@@@@ FAIL: LBUG/LASSERT detected
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
Lustre: Modifying parameter general.debug_raw_pointers in log params
LustreError: lustre-OST0002: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3 to 0x280000400:97)
Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:68 to 0x240000400:161)
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 532:0:(osp_sync.c:631:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 532:0:(osp_sync.c:631:osp_sync_send_new_rpc()) LBUG
CPU: 7 PID: 532 Comm: osp-syn-2-0 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa0228a9d>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa14c3844>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<ffffffffa14c8709>] osp_sync_process_record+0x3e9/0xf20 [osp]
[<ffffffffa0649780>] ? lustre_swab_niobuf_remote+0x30/0x30 [ptlrpc]
[<ffffffffa14c96c4>] osp_sync_process_queues+0x484/0xde0 [osp]
[<ffffffffa03a6226>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa03a3459>] ? llog_handle_get+0x19/0x30 [obdclass]
[<ffffffffa14c9240>] ? osp_sync_process_record+0xf20/0xf20 [osp]
[<ffffffffa03a72d8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffffa03ac7b1>] ? llog_cat_process_common+0x121/0x460 [obdclass]
[<ffffffffa03ada79>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<ffffffffa03a6226>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa03ad740>] ? llog_cat_cancel_records+0x140/0x140 [obdclass]
[<ffffffffa03a72d8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffffa03a9a69>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<ffffffff810d03f2>] ? default_wake_function+0x12/0x20
[<ffffffffa14c9240>] ? osp_sync_process_record+0xf20/0xf20 [osp]
[<ffffffffa03a9dde>] llog_cat_process+0x2e/0x30 [obdclass]
[<ffffffffa14c5fec>] osp_sync_thread+0x18c/0xc00 [osp]
[<ffffffff810c834d>] ? finish_task_switch+0x5d/0x1b0
[<ffffffff817e05ca>] ? __schedule+0x32a/0x7d0
[<ffffffffa14c5e60>] ? osp_sync_process_committed+0xce0/0xce0 [osp]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
Lustre: Modifying parameter general.debug_raw_pointers in log params
LustreError: 137-5: lustre-OST0003: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: 532:0:(llog_cat.c:737:llog_cat_cancel_arr_rec()) lustre-OST0002-osc-MDT0000: fail to cancel 1 records in [0x1:0x22:0x0]: rc = 0
Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:36 to 0x240000400:129)
Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3 to 0x280000400:97)
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 10300:0:(osp_sync.c:631:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 10300:0:(osp_sync.c:631:osp_sync_send_new_rpc()) LBUG
CPU: 3 PID: 10300 Comm: osp-syn-2-0 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa0237a9d>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa14a5874>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<ffffffffa14aa7a9>] osp_sync_process_record+0x3e9/0xf20 [osp]
[<ffffffffa0660870>] ? lustre_swab_niobuf_remote+0x30/0x30 [ptlrpc]
[<ffffffffa14ab764>] osp_sync_process_queues+0x484/0xde0 [osp]
[<ffffffffa03bc226>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa03b9459>] ? llog_handle_get+0x19/0x30 [obdclass]
[<ffffffffa14ab2e0>] ? osp_sync_process_record+0xf20/0xf20 [osp]
[<ffffffffa03bd2d8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffffa03c27b1>] ? llog_cat_process_common+0x121/0x460 [obdclass]
[<ffffffffa03c3ad9>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<ffffffffa03bc226>] llog_process_thread+0xc96/0x1c60 [obdclass]
[<ffffffffa03c37a0>] ? llog_cat_cancel_records+0x1d0/0x1d0 [obdclass]
[<ffffffffa03bd2d8>] llog_process_or_fork+0xe8/0x590 [obdclass]
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffffa03bfa69>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<ffffffff810d03f2>] ? default_wake_function+0x12/0x20
[<ffffffffa14ab2e0>] ? osp_sync_process_record+0xf20/0xf20 [osp]
[<ffffffffa03bfdde>] llog_cat_process+0x2e/0x30 [obdclass]
[<ffffffffa14a808c>] osp_sync_thread+0x18c/0xc00 [osp]
[<ffffffff810c834d>] ? finish_task_switch+0x5d/0x1b0
[<ffffffff817e05ca>] ? __schedule+0x32a/0x7d0
[<ffffffffa14a7f00>] ? osp_sync_process_committed+0xd50/0xd50 [osp]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: lustre-OST0003: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0000: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x240000400:36 to 0x240000400:129)
Lustre: lustre-OST0001: new connection from lustre-MDT0000-mdtlov (cleaning up unused objects from 0x280000400:3 to 0x280000400:97)
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 1078:0:(osp_sync.c:631:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 1078:0:(osp_sync.c:631:osp_sync_send_new_rpc()) LBUG
CPU: 0 PID: 1078 Comm: osp-syn-0-0 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa023aa9d>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa1429934>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<ffffffffa142e879>] osp_sync_process_record+0x3e9/0xf20 [osp]
[<ffffffffa0631760>] ? lustre_swab_niobuf_remote+0x30/0x30 [ptlrpc]
[<ffffffffa142f834>] osp_sync_process_queues+0x484/0xde0 [osp]
[<ffffffffa039a1c6>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<ffffffffa142f3b0>] ? osp_sync_process_record+0xf20/0xf20 [osp]
[<ffffffffa039b265>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<ffffffffa03a0731>] ? llog_cat_process_common+0x121/0x460 [obdclass]
[<ffffffffa03a1a59>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<ffffffffa039a1c6>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<ffffffffa03a1720>] ? llog_cat_cancel_records+0x1d0/0x1d0 [obdclass]
[<ffffffffa039b265>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffffa039d9f9>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<ffffffff810d03f2>] ? default_wake_function+0x12/0x20
[<ffffffffa142f3b0>] ? osp_sync_process_record+0xf20/0xf20 [osp]
[<ffffffffa039dd6e>] llog_cat_process+0x2e/0x30 [obdclass]
[<ffffffffa142c13e>] osp_sync_thread+0x18e/0xc20 [osp]
[<ffffffff810c834d>] ? finish_task_switch+0x5d/0x1b0
[<ffffffff817e05ca>] ? __schedule+0x32a/0x7d0
[<ffffffffa142bfb0>] ? osp_sync_process_committed+0xd40/0xd40 [osp]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:68 to 0x240000400:161
Lustre: lustre-OST0001: deleting orphan objects from 0x280000400:3 to 0x280000400:97
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 15468:0:(osp_sync.c:631:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 15468:0:(osp_sync.c:631:osp_sync_send_new_rpc()) LBUG
Pid: 15468, comm: osp-syn-1-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0xf20 [osp]
[<0>] osp_sync_process_queues+0x484/0xde0 [osp]
[<0>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<0>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<0>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<0>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x188/0xc10 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:68 to 0x240000400:161
LustreError: 137-5: lustre-OST0003_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0001: deleting orphan objects from 0x280000400:3 to 0x280000400:97
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 30051:0:(osp_sync.c:631:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 30051:0:(osp_sync.c:631:osp_sync_send_new_rpc()) LBUG
Pid: 30051, comm: osp-syn-1-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0xf20 [osp]
[<0>] osp_sync_process_queues+0x484/0xde0 [osp]
[<0>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<0>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<0>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<0>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x188/0xc10 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: lustre-OST0003_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:36 to 0x240000400:129
Lustre: lustre-OST0001: deleting orphan objects from 0x280000400:3 to 0x280000400:97
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 6681:0:(osp_sync.c:631:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 6681:0:(osp_sync.c:631:osp_sync_send_new_rpc()) LBUG
Pid: 6681, comm: osp-syn-3-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0xf20 [osp]
[<0>] osp_sync_process_queues+0x484/0xde0 [osp]
[<0>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<0>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<0>] llog_process_thread+0xc66/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xe5/0x580 [obdclass]
[<0>] llog_cat_process_or_fork+0x119/0x460 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x188/0xc10 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 2 previous similar messages
Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:36 to 0x240000400:129
Lustre: lustre-OST0001: deleting orphan objects from 0x280000400:3 to 0x280000400:97
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 29425:0:(osp_sync.c:637:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 29425:0:(osp_sync.c:637:osp_sync_send_new_rpc()) LBUG
Pid: 29425, comm: osp-syn-0-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0xf10 [osp]
[<0>] osp_sync_process_queues+0x4b4/0xdf0 [osp]
[<0>] llog_process_thread+0xc96/0x1c10 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<0>] llog_process_thread+0xc96/0x1c10 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_or_fork+0x119/0x480 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x1a4/0xc40 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 3 previous similar messages
Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:36 to 0x240000400:129
Lustre: lustre-OST0001: deleting orphan objects from 0x280000400:3 to 0x280000400:97
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 30475:0:(osp_sync.c:637:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 30475:0:(osp_sync.c:637:osp_sync_send_new_rpc()) LBUG
Pid: 30475, comm: osp-syn-1-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0xf10 [osp]
[<0>] osp_sync_process_queues+0x4b4/0xde0 [osp]
[<0>] llog_process_thread+0xc96/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_cb+0x339/0x350 [obdclass]
[<0>] llog_process_thread+0xc96/0x1c20 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_or_fork+0x119/0x480 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x1a4/0xc40 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 5 previous similar messages
Lustre: lustre-OST0000: deleting orphan objects from 0x240000400:36 to 0x240000400:129
Lustre: lustre-OST0001: deleting orphan objects from 0x280000400:3 to 0x280000400:97
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
conf-sanity test 114: verify dynamic thread creation
LustreError: 30764:0:(osp_sync.c:643:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 30764:0:(osp_sync.c:643:osp_sync_send_new_rpc()) LBUG
Pid: 30764, comm: osp-syn-1-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0xf10 [osp]
[<0>] osp_sync_process_queues+0x4b4/0xde0 [osp]
[<0>] llog_process_thread+0x99e/0x1b90 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_cb+0x2d1/0x2e0 [obdclass]
[<0>] llog_process_thread+0x99e/0x1b90 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_or_fork+0x211/0x3b0 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x1a4/0xc40 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 4 previous similar messages
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0001: deleting orphan objects from 0x0:3 to 0x0:97
LustreError: Skipped 7 previous similar messages
Lustre: lustre-OST0000: deleting orphan objects from 0x0:36 to 0x0:129
Lustre: lustre-OST0003: Not available for connect from 0@lo (not set up)
Lustre: Mounted lustre-client
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test
sanity test 115: verify dynamic thread creation
LustreError: 29324:0:(osp_sync.c:644:osp_sync_send_new_rpc()) ASSERTION( atomic_read(&d->opd_sync_rpcs_in_flight) <= d->opd_sync_max_rpcs_in_flight ) failed:
LustreError: 29324:0:(osp_sync.c:644:osp_sync_send_new_rpc()) LBUG
Pid: 29324, comm: osp-syn-2-0 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] libcfs_call_trace+0x90/0xf0 [libcfs]
[<0>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<0>] osp_sync_send_new_rpc+0xf4/0x100 [osp]
[<0>] osp_sync_process_record+0x3e9/0x1040 [osp]
[<0>] osp_sync_process_queues+0x4b4/0xde0 [osp]
[<0>] llog_process_thread+0x8be/0x1c50 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_cb+0x2b9/0x2d0 [obdclass]
[<0>] llog_process_thread+0x8be/0x1c50 [obdclass]
[<0>] llog_process_or_fork+0xdc/0x570 [obdclass]
[<0>] llog_cat_process_or_fork+0x201/0x3a0 [obdclass]
[<0>] llog_cat_process+0x2e/0x30 [obdclass]
[<0>] osp_sync_thread+0x1a4/0xc40 [osp]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: Unmounted lustre-client
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0000-lwp-OST0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0000: Not available for connect from 0@lo (stopping)
LustreError: 27827:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
Lustre: server umount lustre-MDT0000 complete
LustreError: 8241:0:(ldlm_lockd.c:2500:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1649784979 with bad export cookie 13488015208953853641
LustreError: 166-1: MGC192.168.123.50@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
LustreError: 28039:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
LustreError: 28039:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 51 previous similar messages
Lustre: server umount lustre-OST0000 complete
LustreError: 28208:0:(lprocfs_jobstats.c:137:job_stat_exit()) should not have any items
LustreError: 28208:0:(lprocfs_jobstats.c:137:job_stat_exit()) Skipped 11 previous similar messages
Lustre: server umount lustre-OST0002 complete
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 1 previous similar message
Lustre: lustre-OST0000: Imperative Recovery not enabled, recovery window 60-180
LustreError: 137-5: lustre-OST0001_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
Lustre: lustre-OST0000: deleting orphan objects from 0x0:15795 to 0x0:16561
LustreError: 137-5: lustre-OST0002_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 2 previous similar messages
Lustre: lustre-OST0001: Imperative Recovery not enabled, recovery window 60-180
Lustre: lustre-OST0001: deleting orphan objects from 0x0:38900 to 0x0:38945
LustreError: 137-5: lustre-OST0003_UUID: not available for connect from 0@lo (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: Skipped 1 previous similar message
Lustre: lustre-OST0002: Imperative Recovery not enabled, recovery window 60-180
Lustre: lustre-OST0002: deleting orphan objects from 0x0:37364 to 0x0:37473
Lustre: lustre-OST0003: deleting orphan objects from 0x0:37204 to 0x0:37249
Lustre: Mounted lustre-client
Lustre: Skipped 1 previous similar message
Lustre: DEBUG MARKER: Using TIMEOUT=20
Lustre: Modifying parameter general.lod.*.mdt_hash in log params
Link to test