Editing crashreport #74472

Reason: BUG: kernel NULL pointer dereference
Crashing Function: qsd_op_end
Where to cut Backtrace:
    osd_trans_stop
    nidtbl_update_version
    mgs_nidtbl_write
    mgs_ir_update
    mgs_target_reg
    tgt_handle_request0
    tgt_request_handle
    ptlrpc_server_handle_request
    ptlrpc_main
    kthread
    ret_from_fork
Reports Count: 3


Failures list (last 100):

Failing Test | Full Crash | Messages before crash | Comment
recovery-small test 143: orphan cleanup thread shouldn't be blocked even delete failed
BUG: kernel NULL pointer dereference, address: 0000000000000060
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 12cc7c067 P4D 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 1 PID: 27652 Comm: ll_mgs_0003 Kdump: loaded Tainted: G OE ------- --- 5.14.0-503.40.1_lustre.el9.x86_64 #1
Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9_5.1 04/01/2014
RIP: 0010:qsd_op_end+0x91/0x330 [lquota]
Code: c7 05 f7 15 03 00 01 00 00 00 48 c7 05 f4 15 03 00 00 00 00 00 e8 ef b6 44 ff 48 85 db 0f 84 38 02 00 00 48 8b 83 80 00 00 00 <f6> 40 60 02 0f 84 8d 00 00 00 f6 05 06 6c 45 ff 01 0f 84 6e 01 00
RSP: 0018:ff48a22c066f3b48 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ff2baab2c2b6ac00 RCX: ffff0a00ffffff04
RDX: 0000000000000000 RSI: ff2baab303b90000 RDI: ff2baab2c290f880
RBP: ff2baab2ee179b78 R08: 000000000000000a R09: ff2baab403b8f88b
R10: ffffffffffffffff R11: 000000000000000f R12: ff2baab2c2160b00
R13: 0000000000000000 R14: ff2baab2c3ae0000 R15: ff2baab2c27c2258
FS: 0000000000000000(0000) GS:ff2baab33bb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000060 CR3: 0000000136ca2005 CR4: 0000000000771ef0
PKRU: 55555554
Call Trace:
<TASK>
? srso_alias_return_thunk+0x5/0xfbef5
? show_trace_log_lvl+0x26e/0x2df
? show_trace_log_lvl+0x26e/0x2df
? osd_trans_stop+0x35e/0x850 [osd_ldiskfs]
? __die_body.cold+0x8/0xd
? page_fault_oops+0x134/0x170
? exc_page_fault+0x62/0x150
? asm_exc_page_fault+0x22/0x30
? qsd_op_end+0x91/0x330 [lquota]
? qsd_op_end+0x81/0x330 [lquota]
osd_trans_stop+0x35e/0x850 [osd_ldiskfs]
? srso_alias_return_thunk+0x5/0xfbef5
? osd_write+0x10d/0x4a0 [osd_ldiskfs]
nidtbl_update_version+0x21c/0x540 [mgs]
mgs_nidtbl_write+0x1aa/0x420 [mgs]
mgs_ir_update+0x7b/0x2a0 [mgs]
mgs_target_reg+0x92f/0x1b60 [mgs]
? srso_alias_return_thunk+0x5/0xfbef5
? req_capsule_server_pack+0x1f1/0x2c0 [ptlrpc]
? srso_alias_return_thunk+0x5/0xfbef5
tgt_handle_request0+0x147/0x770 [ptlrpc]
tgt_request_handle+0x3fd/0xd00 [ptlrpc]
ptlrpc_server_handle_request.isra.0+0x2e5/0xd80 [ptlrpc]
? srso_alias_return_thunk+0x5/0xfbef5
ptlrpc_main+0x9bf/0xea0 [ptlrpc]
? __pfx_ptlrpc_main+0x10/0x10 [ptlrpc]
kthread+0xdd/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x29/0x50
</TASK>
Modules linked in: osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) dm_flakey rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache netfs rfkill sunrpc intel_rapl_msr intel_rapl_common kvm_amd ccp kvm iTCO_wdt iTCO_vendor_support pcspkr virtio_balloon i2c_i801 lpc_ich i2c_smbus joydev drm fuse dm_mod ext4 mbcache jbd2 ahci libahci crct10dif_pclmul libata crc32_pclmul crc32c_intel ghash_clmulni_intel virtio_net net_failover failover virtio_blk serio_raw
CR2: 0000000000000060
Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true
Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1
Lustre: Failing over lustre-MDT0000
LustreError: 27649:0:(ldlm_lib.c:2985:target_stop_recovery_thread()) lustre-MDT0000: Aborting recovery
Lustre: 26405:0:(ldlm_lib.c:2388:target_recovery_overseer()) recovery is aborted, evict exports in recovery
Lustre: 26405:0:(ldlm_lib.c:2388:target_recovery_overseer()) Skipped 2 previous similar messages
LustreError: 26405:0:(ldlm_lib.c:1918:abort_lock_replay_queue()) @@@ aborted: req@ff2baab3034e9d40 x1856004388220672/t0(0) o101->lustre-MDT0003-mdtlov_UUID@10.240.25.91@tcp:741/0 lens 328/0 e 0 to 0 dl 1770029536 ref 1 fl Complete:/240/ffffffff rc 0/-1 job:'ldlm_lock_repla.0' uid:0 gid:0 projid:4294967295
Lustre: lustre-MDT0000-osd: cancel update llog [0x200009870:0x1:0x0]
Lustre: lustre-MDT0001-osp-MDT0000: cancel update llog [0x240000409:0x1:0x0]
LustreError: 26405:0:(client.c:1379:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ff2baab2edc023c0 x1856010218464640/t0(0) o700->lustre-MDT0001-osp-MDT0000@10.240.25.91@tcp:30/10 lens 264/248 e 0 to 0 dl 0 ref 2 fl Rpc:QU/200/ffffffff rc 0/-1 job:'tgt_recover_0.0' uid:0 gid:0 projid:4294967295
LustreError: 26405:0:(fid_request.c:212:seq_client_alloc_seq()) cli-cli-lustre-MDT0001-osp-MDT0000: Cannot allocate new meta-sequence: rc = -5
LustreError: 26405:0:(fid_request.c:315:seq_client_alloc_fid()) cli-cli-lustre-MDT0001-osp-MDT0000: Can't allocate new sequence: rc = -5
Lustre: lustre-MDT0002-osp-MDT0000: cancel update llog [0x280000409:0x1:0x0]
Lustre: lustre-MDT0003-osp-MDT0000: cancel update llog [0x2c0000409:0x1:0x0]
Lustre: lustre-MDT0000: Recovery over after 0:03, of 5 clients 0 recovered and 5 were evicted.
Lustre: lustre-MDT0000: Not available for connect from 10.240.25.91@tcp (stopping)
Lustre: Skipped 11 previous similar messages
Link to test
recovery-small test 111: mdd setup fail should not cause umount oops
BUG: kernel NULL pointer dereference, address: 0000000000000060
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 136a47067 P4D 103547067 PUD 10a74a067 PMD 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 0 PID: 132130 Comm: ll_mgs_0003 Kdump: loaded Tainted: G OE ------- --- 5.14.0-503.40.1_lustre.el9.x86_64 #1
Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9_5.1 04/01/2014
RIP: 0010:qsd_op_end+0x91/0x330 [lquota]
Code: c7 05 57 15 03 00 01 00 00 00 48 c7 05 54 15 03 00 00 00 00 00 e8 0f d7 42 ff 48 85 db 0f 84 38 02 00 00 48 8b 83 80 00 00 00 <f6> 40 60 02 0f 84 8d 00 00 00 f6 05 06 8c 43 ff 01 0f 84 6e 01 00
RSP: 0018:ff727c5143723b48 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ff3b2a6caf4b3200 RCX: ffff0a00ffffff04
RDX: 0000000000000000 RSI: ff3b2a6cb3b89000 RDI: ff3b2a6c82b36900
RBP: ff3b2a6cb3afcb78 R08: 000000000000000a R09: ff3b2a6db3b8872c
R10: ffffffffffffffff R11: 000000000000000f R12: ff3b2a6cb253b9c0
R13: 0000000000000000 R14: ff3b2a6cb2688000 R15: ff3b2a6cb555ae58
FS: 0000000000000000(0000) GS:ff3b2a6cfba00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000060 CR3: 0000000132630002 CR4: 0000000000771ef0
PKRU: 55555554
Call Trace:
<TASK>
? srso_alias_return_thunk+0x5/0xfbef5
? show_trace_log_lvl+0x26e/0x2df
? show_trace_log_lvl+0x26e/0x2df
? osd_trans_stop+0x35e/0x850 [osd_ldiskfs]
? __die_body.cold+0x8/0xd
? page_fault_oops+0x134/0x170
? exc_page_fault+0x62/0x150
? asm_exc_page_fault+0x22/0x30
? qsd_op_end+0x91/0x330 [lquota]
? qsd_op_end+0x81/0x330 [lquota]
osd_trans_stop+0x35e/0x850 [osd_ldiskfs]
? srso_alias_return_thunk+0x5/0xfbef5
? srso_alias_return_thunk+0x5/0xfbef5
? osd_write+0x10d/0x4a0 [osd_ldiskfs]
nidtbl_update_version+0x21c/0x540 [mgs]
mgs_nidtbl_write+0x1aa/0x420 [mgs]
mgs_ir_update+0x7b/0x2a0 [mgs]
mgs_target_reg+0x92f/0x1b60 [mgs]
? srso_alias_return_thunk+0x5/0xfbef5
? req_capsule_server_pack+0x1f1/0x2c0 [ptlrpc]
? srso_alias_return_thunk+0x5/0xfbef5
tgt_handle_request0+0x147/0x770 [ptlrpc]
tgt_request_handle+0x3fd/0xd00 [ptlrpc]
ptlrpc_server_handle_request.isra.0+0x2e5/0xd80 [ptlrpc]
? srso_alias_return_thunk+0x5/0xfbef5
ptlrpc_main+0x9bf/0xea0 [ptlrpc]
? __pfx_ptlrpc_main+0x10/0x10 [ptlrpc]
kthread+0xdd/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x29/0x50
</TASK>
Modules linked in: tls osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache netfs rfkill sunrpc intel_rapl_msr intel_rapl_common kvm_amd ccp kvm i2c_i801 iTCO_wdt iTCO_vendor_support i2c_smbus pcspkr virtio_balloon lpc_ich joydev fuse drm ext4 mbcache jbd2 ahci crct10dif_pclmul libahci crc32_pclmul crc32c_intel libata virtio_blk ghash_clmulni_intel virtio_net net_failover failover serio_raw
CR2: 0000000000000060
Lustre: DEBUG MARKER: lctl set_param fail_loc=0x151
Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true
Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1
Lustre: Failing over lustre-MDT0000
LustreError: 131402:0:(obd_class.h:478:obd_check_dev()) Device 34 not setup
LustreError: 131402:0:(obd_class.h:478:obd_check_dev()) Skipped 71 previous similar messages
LDISKFS-fs (dm-3): unmounting filesystem 17a686cc-cf18-435c-8226-e66357eaa689.
Lustre: server umount lustre-MDT0000 complete
Lustre: DEBUG MARKER: lsmod | grep lnet > /dev/null &&
Lustre: DEBUG MARKER: modprobe dm-flakey;
LustreError: 9695:0:(ldlm_lib.c:1176:target_handle_connect()) lustre-MDT0000: not available for connect from 10.240.42.54@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: 9695:0:(ldlm_lib.c:1176:target_handle_connect()) Skipped 100 previous similar messages
Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1
Lustre: DEBUG MARKER: modprobe dm-flakey;
Lustre: DEBUG MARKER: dmsetup status /dev/mapper/mds1_flakey >/dev/null 2>&1
Lustre: DEBUG MARKER: dmsetup status /dev/mapper/mds1_flakey 2>&1
Lustre: DEBUG MARKER: test -b /dev/mapper/mds1_flakey
Lustre: DEBUG MARKER: e2label /dev/mapper/mds1_flakey
Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1; mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
LDISKFS-fs (dm-3): mounted filesystem 17a686cc-cf18-435c-8226-e66357eaa689 r/w with ordered data mode. Quota mode: journalled.
LustreError: MGC10.240.42.56@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
LustreError: Skipped 2 previous similar messages
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 2 previous similar messages
Lustre: *** cfs_fail_loc=151, val=0***
LustreError: 132083:0:(mdd_device.c:673:mdd_changelog_init()) lustre-MDD0000: changelog setup during init failed: rc = -5
LustreError: 132083:0:(mdd_device.c:1405:mdd_prepare()) lustre-MDD0000: failed to initialize changelog: rc = -5
LustreError: 132083:0:(tgt_mount.c:2576:server_fill_super()) Unable to start targets: -5
Lustre: Failing over lustre-MDT0000
LustreError: 132127:0:(llog_osd.c:223:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: can't read llog [0x24000040a:0x1:0x0] header: rc = -5
LustreError: 132127:0:(lod_dev.c:508:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 0, retries 0, failed: rc = -5
Lustre: lustre-MDT0000: Not available for connect from 10.240.42.57@tcp (stopping)
Link to test
recovery-small test 111: mdd setup fail should not cause umount oops
BUG: kernel NULL pointer dereference, address: 0000000000000060
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 12e394067 P4D 102ffd067 PUD 12dedd067 PMD 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 0 PID: 132094 Comm: ll_mgs_0002 Kdump: loaded Tainted: G OE ------- --- 5.14.0-503.40.1_lustre.el9.x86_64 #1
Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9_5.1 04/01/2014
RIP: 0010:qsd_op_end+0x91/0x330 [lquota]
Code: c7 05 57 15 03 00 01 00 00 00 48 c7 05 54 15 03 00 00 00 00 00 e8 0f 27 41 ff 48 85 db 0f 84 38 02 00 00 48 8b 83 80 00 00 00 <f6> 40 60 02 0f 84 8d 00 00 00 f6 05 06 dc 41 ff 01 0f 84 6e 01 00
RSP: 0018:ff23faa00154fb48 EFLAGS: 00010286
RAX: 0000000000000000 RBX: ff228a7e0504ee00 RCX: ffff0a00ffffff04
RDX: 0000000000000000 RSI: ff228a7e2de4b000 RDI: ff228a7e03d49b00
RBP: ff228a7e32374b78 R08: 000000000000000a R09: ff228a7f2de4a588
R10: ffffffffffffffff R11: 000000000000000f R12: ff228a7e06db00c0
R13: 0000000000000000 R14: ff228a7e26f38000 R15: ff228a7e2b430258
FS: 0000000000000000(0000) GS:ff228a7e7ba00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000060 CR3: 000000012d732006 CR4: 0000000000771ef0
PKRU: 55555554
Call Trace:
<TASK>
? srso_alias_return_thunk+0x5/0xfbef5
? show_trace_log_lvl+0x26e/0x2df
? show_trace_log_lvl+0x26e/0x2df
? osd_trans_stop+0x35e/0x850 [osd_ldiskfs]
? __die_body.cold+0x8/0xd
? page_fault_oops+0x134/0x170
? exc_page_fault+0x62/0x150
? asm_exc_page_fault+0x22/0x30
? qsd_op_end+0x91/0x330 [lquota]
? qsd_op_end+0x81/0x330 [lquota]
osd_trans_stop+0x35e/0x850 [osd_ldiskfs]
? srso_alias_return_thunk+0x5/0xfbef5
? srso_alias_return_thunk+0x5/0xfbef5
? osd_write+0x10d/0x4a0 [osd_ldiskfs]
nidtbl_update_version+0x21c/0x540 [mgs]
mgs_nidtbl_write+0x1aa/0x420 [mgs]
mgs_ir_update+0x7b/0x2a0 [mgs]
mgs_target_reg+0x92f/0x1b60 [mgs]
? srso_alias_return_thunk+0x5/0xfbef5
? req_capsule_server_pack+0x1f1/0x2c0 [ptlrpc]
? srso_alias_return_thunk+0x5/0xfbef5
tgt_handle_request0+0x147/0x770 [ptlrpc]
tgt_request_handle+0x3fd/0xd00 [ptlrpc]
ptlrpc_server_handle_request.isra.0+0x2e5/0xd80 [ptlrpc]
? srso_alias_return_thunk+0x5/0xfbef5
ptlrpc_main+0x9bf/0xea0 [ptlrpc]
? __pfx_ptlrpc_main+0x10/0x10 [ptlrpc]
kthread+0xdd/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x29/0x50
</TASK>
Modules linked in: tls osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) ldiskfs(OE) lquota(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) dm_flakey dm_mod rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache netfs intel_rapl_msr intel_rapl_common kvm_amd ccp kvm rfkill iTCO_wdt iTCO_vendor_support i2c_i801 pcspkr i2c_smbus virtio_balloon lpc_ich joydev sunrpc drm fuse ext4 mbcache jbd2 ahci libahci libata crct10dif_pclmul crc32_pclmul crc32c_intel virtio_net net_failover failover ghash_clmulni_intel virtio_blk serio_raw
CR2: 0000000000000060
Lustre: DEBUG MARKER: lctl set_param fail_loc=0x151
Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true
Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1
Lustre: Failing over lustre-MDT0000
LustreError: 131406:0:(obd_class.h:478:obd_check_dev()) Device 33 not setup
LustreError: 131406:0:(obd_class.h:478:obd_check_dev()) Skipped 71 previous similar messages
LDISKFS-fs (dm-3): unmounting filesystem ac399ccb-18ca-4651-bd12-593f0fccc0d1.
Lustre: server umount lustre-MDT0000 complete
Lustre: DEBUG MARKER: lsmod | grep lnet > /dev/null &&
Lustre: DEBUG MARKER: modprobe dm-flakey;
LustreError: 6486:0:(ldlm_lib.c:1176:target_handle_connect()) lustre-MDT0000: not available for connect from 10.240.39.122@tcp (no target). If you are running an HA pair check that the target is mounted on the other server.
LustreError: 6486:0:(ldlm_lib.c:1176:target_handle_connect()) Skipped 99 previous similar messages
Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1
Lustre: DEBUG MARKER: modprobe dm-flakey;
Lustre: DEBUG MARKER: dmsetup status /dev/mapper/mds1_flakey >/dev/null 2>&1
Lustre: DEBUG MARKER: dmsetup status /dev/mapper/mds1_flakey 2>&1
Lustre: DEBUG MARKER: test -b /dev/mapper/mds1_flakey
Lustre: DEBUG MARKER: e2label /dev/mapper/mds1_flakey
Lustre: DEBUG MARKER: mkdir -p /mnt/lustre-mds1; mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
LDISKFS-fs (dm-3): mounted filesystem ac399ccb-18ca-4651-bd12-593f0fccc0d1 r/w with ordered data mode. Quota mode: journalled.
LustreError: MGC10.240.39.124@tcp: Connection to MGS (at 0@lo) was lost; in progress operations using this service will fail
LustreError: Skipped 2 previous similar messages
Lustre: lustre-MDT0000: Imperative Recovery not enabled, recovery window 60-180
Lustre: Skipped 2 previous similar messages
Lustre: *** cfs_fail_loc=151, val=0***
LustreError: 132087:0:(mdd_device.c:673:mdd_changelog_init()) lustre-MDD0000: changelog setup during init failed: rc = -5
LustreError: 132087:0:(mdd_device.c:1405:mdd_prepare()) lustre-MDD0000: failed to initialize changelog: rc = -5
LustreError: 132087:0:(tgt_mount.c:2576:server_fill_super()) Unable to start targets: -5
Lustre: Failing over lustre-MDT0000
LustreError: 132131:0:(llog_osd.c:223:llog_osd_read_header()) lustre-MDT0001-osp-MDT0000: can't read llog [0x24000040c:0x1:0x0] header: rc = -5
Lustre: 132131:0:(llog_cat.c:839:llog_cat_process_common()) lustre-MDT0001-osp-MDT0000: can't find llog handle [0x24000040c:0x1:0x0]: rc = -5
LustreError: 132131:0:(llog.c:866:llog_process_thread()) lustre-MDT0001-osp-MDT0000 retry remote llog process
LustreError: 132131:0:(lod_dev.c:508:lod_sub_recovery_thread()) lustre-MDT0001-osp-MDT0000: get update log duration 0, retries 0, failed: rc = -11
Lustre: lustre-MDT0000: Not available for connect from 10.240.39.125@tcp (stopping)
Lustre: Skipped 4 previous similar messages
Link to test