Editing crashreport #70605

Reason: ASSERTION( atomic_read(&d->ld_ref) == 0 ) failed
Crashing Function: mdt_fini
Where to cut Backtrace:
    mdt_device_fini
    obd_precleanup
    class_cleanup
    class_process_config
    class_manual_cleanup
    server_put_super
    generic_shutdown_super
    kill_anon_super
    deactivate_locked_super
    cleanup_mnt
    task_work_run
    exit_to_usermode_loop
    do_syscall_64
    entry_SYSCALL_64_after_hwframe
Reports Count: 2
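For context, a minimal user-space sketch of the invariant this assertion enforces: a Lustre lu_device carries a reference count (ld_ref), and mdt_fini() expects every reference to have been dropped before the device is torn down. The struct and helper names below are illustrative stand-ins, not the Lustre definitions; assert() stands in for the kernel LASSERT.

    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Illustrative stand-in for struct lu_device; only the refcount matters. */
    struct lu_device {
            atomic_int ld_ref;
    };

    static void lu_device_get(struct lu_device *d) { atomic_fetch_add(&d->ld_ref, 1); }
    static void lu_device_put(struct lu_device *d) { atomic_fetch_sub(&d->ld_ref, 1); }

    /* Mirrors the failing check in mdt_fini(): teardown is only legal once
     * every object reference against the device has been released. A leaked
     * reference (an object still live at umount) trips the assertion. */
    static void mdt_fini_sketch(struct lu_device *d)
    {
            assert(atomic_load(&d->ld_ref) == 0); /* LASSERT in the real code */
            printf("device freed\n");
    }

    int main(void)
    {
            struct lu_device d = { .ld_ref = 0 };

            lu_device_get(&d);
            /* Omitting this put reproduces the LBUG-style failure: */
            lu_device_put(&d);

            mdt_fini_sketch(&d);
            return 0;
    }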

Failures list (last 100):

(Each entry below lists: Failing Test, Full Crash, Messages before crash, Comment.)

Failing Test: recovery-small test 110k: FID_QUERY failed during recovery
Full Crash:
LustreError: 83421:0:(mdt_handler.c:6462:mdt_fini()) ASSERTION( atomic_read(&d->ld_ref) == 0 ) failed:
LustreError: 83421:0:(mdt_handler.c:6462:mdt_fini()) LBUG
CPU: 1 PID: 83421 Comm: umount Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.53.1.el8_lustre.x86_64 #1
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
Call Trace:
dump_stack+0x41/0x60
lbug_with_loc.cold.8+0x5/0x43 [libcfs]
mdt_device_fini+0xe2f/0xef0 [mdt]
obd_precleanup+0xdc/0x280 [obdclass]
? class_disconnect_exports+0x187/0x2f0 [obdclass]
class_cleanup+0x322/0x7e0 [obdclass]
class_process_config+0x3bb/0x20f0 [obdclass]
class_manual_cleanup+0x45b/0x780 [obdclass]
server_put_super+0xd65/0x1440 [ptlrpc]
? fsnotify_sb_delete+0x138/0x1c0
generic_shutdown_super+0x6c/0x110
kill_anon_super+0x14/0x30
deactivate_locked_super+0x34/0x70
cleanup_mnt+0x3b/0x70
task_work_run+0x8a/0xb0
exit_to_usermode_loop+0xef/0x100
do_syscall_64+0x195/0x1a0
entry_SYSCALL_64_after_hwframe+0x66/0xcb
RIP: 0033:0x7f923267d8fb
Messages before crash:
Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds2' ' /proc/mounts || true
Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds2
Lustre: Failing over lustre-MDT0001
LustreError: 83421:0:(obd_class.h:479:obd_check_dev()) Device 33 not setup
Link to test
Failing Test: sanity-quota test 81: Race qmt_start_pool_recalc with qmt_pool_free
Full Crash:
LustreError: 337417:0:(mdt_handler.c:6218:mdt_fini()) ASSERTION( atomic_read(&d->ld_ref) == 0 ) failed:
LustreError: 361:0:(lu_object.c:998:lu_site_print()) header@00000000f2d0c2f3[0x4, 1, [0xa:0x0:0x0] hash exist]{
LustreError: 337417:0:(mdt_handler.c:6218:mdt_fini()) LBUG
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....local_storage@00000000ab86b486
CPU: 1 PID: 337417 Comm: umount Kdump: loaded Tainted: P OE --------- - - 4.18.0-513.18.1.el8_lustre.x86_64 #1
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....osd-zfs@00000000d5ea7783osd-zfs-object@00000000d5ea7783
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
Call Trace:
dump_stack+0x41/0x60
LustreError: 361:0:(lu_object.c:998:lu_site_print()) } header@00000000f2d0c2f3
lbug_with_loc.cold.8+0x5/0x43 [libcfs]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) header@00000000a7cdf336[0x4, 1, [0x1:0x0:0x0] hash exist]{
mdt_device_fini+0xdb5/0xf60 [mdt]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....local_storage@000000005ead6555
? lu_context_init+0xa5/0x1b0 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....osd-zfs@00000000e60d1e67osd-zfs-object@00000000e60d1e67
obd_precleanup+0x1e5/0x220 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) } header@00000000a7cdf336
class_cleanup+0x31e/0x900 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) header@00000000dd3a08d2[0x4, 1, [0x200000003:0x4:0x0] hash exist]{
class_process_config+0x3ad/0x21f0 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....local_storage@00000000ec88598d
? class_manual_cleanup+0x191/0x780 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....osd-zfs@000000005fc93d77osd-zfs-object@000000005fc93d77
? __kmalloc+0x113/0x250
LustreError: 361:0:(lu_object.c:998:lu_site_print()) } header@00000000dd3a08d2
? lprocfs_counter_add+0x12a/0x1a0 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) header@0000000039ffb883[0x5, 1, [0x200000003:0x2:0x0] hash exist]{
class_manual_cleanup+0x456/0x780 [obdclass]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....local_storage@0000000084ed4caa
server_put_super+0x7b6/0x1310 [ptlrpc]
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....osd-zfs@0000000056420d07osd-zfs-object@0000000056420d07
? fsnotify_unmount_inodes+0x11c/0x1b0
LustreError: 361:0:(lu_object.c:998:lu_site_print()) } header@0000000039ffb883
? evict_inodes+0x160/0x1b0
LustreError: 361:0:(lu_object.c:998:lu_site_print()) header@000000009f83d721[0x4, 1, [0xffffffff:0x1:0x0] hash exist]{
generic_shutdown_super+0x6c/0x110
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....osd-zfs@000000001dd8b05fosd-zfs-object@000000001dd8b05f
kill_anon_super+0x14/0x30
LustreError: 361:0:(lu_object.c:998:lu_site_print()) } header@000000009f83d721
deactivate_locked_super+0x34/0x70
cleanup_mnt+0x3b/0x70
LustreError: 361:0:(lu_object.c:998:lu_site_print()) header@000000005446b233[0x4, 1, [0x200000006:0x2020000:0x0] hash exist]{
task_work_run+0x8a/0xb0
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....mdt@000000008b4ffe1cmdt-object@000000005446b233( , writecount=0)
exit_to_usermode_loop+0xef/0x100
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....mdd@00000000a7ebac50mdd-object@00000000a7ebac50(open_count=0, valid=0, cltime=0ns, flags=0)
do_syscall_64+0x19c/0x1b0
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....lod@00000000155b2edclod-object@00000000155b2edc
entry_SYSCALL_64_after_hwframe+0x61/0xc6
LustreError: 361:0:(lu_object.c:998:lu_site_print()) ....osd-zfs@0000000030785ff6osd-zfs-object@0000000030785ff6
RIP: 0033:0x7f0966877e9b
LustreError: 361:0:(lu_object.c:998:lu_site_print()) } header@000000005446b233
Messages before crash:
Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n osc.*MDT*.sync_*
Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n osp.*.destroys_in_flight
Lustre: DEBUG MARKER: lctl set_param fail_val=0 fail_loc=0
Lustre: DEBUG MARKER: /usr/sbin/lctl conf_param lustre.quota.ost=ugp
Lustre: DEBUG MARKER: /usr/sbin/lctl mark User quota \(block hardlimit:20 MB\)
Lustre: DEBUG MARKER: User quota (block hardlimit:20 MB)
Lustre: DEBUG MARKER: lctl pool_new lustre.qpool1
Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.qpool1 2>/dev/null || echo foo
Lustre: DEBUG MARKER: /usr/sbin/lctl set_param fail_loc=0x80000A07 fail_val=10
Lustre: DEBUG MARKER: /usr/sbin/lctl pool_add lustre.qpool1 lustre-OST[0000-0000/1]
LustreError: 337124:0:(qmt_pool.c:1302:qmt_pool_recalc()) cfs_fail_timeout id a07 sleeping for 10000ms
Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.qpool1 |
Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true
Lustre: DEBUG MARKER: umount -d -f /mnt/lustre-mds1
Lustre: lustre-MDT0000: Not available for connect from 10.240.42.238@tcp (stopping)
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0000: Not available for connect from 10.240.42.238@tcp (stopping)
Lustre: Skipped 3 previous similar messages
LustreError: 337124:0:(qmt_pool.c:1302:qmt_pool_recalc()) cfs_fail_timeout id a07 awake
Link to test
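The messages above show how this failure is provoked: fail_loc 0x80000A07 ("cfs_fail_timeout id a07 sleeping for 10000ms") parks the pool-recalc thread for 10 s while it still pins the device, and the forced umount -f tears the MDT down underneath it, so ld_ref is nonzero when mdt_fini() runs. Below is a hypothetical pthread model of that race; the thread names, timing, and the plain counter standing in for lu_device::ld_ref are illustrative, not the qmt_pool.c implementation.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_int ld_ref; /* stands in for lu_device::ld_ref */

    static void *pool_recalc_thread(void *arg)
    {
            (void)arg;
            atomic_fetch_add(&ld_ref, 1); /* recalc pins the device */
            sleep(2);                     /* injected cfs_fail_timeout delay */
            atomic_fetch_sub(&ld_ref, 1); /* released too late */
            return NULL;
    }

    int main(void)
    {
            pthread_t recalc;
            pthread_create(&recalc, NULL, pool_recalc_thread, NULL);

            sleep(1); /* umount -f races the still-sleeping recalc thread */
            if (atomic_load(&ld_ref) != 0) /* where the mdt_fini() LASSERT fires */
                    fprintf(stderr, "ASSERTION( ld_ref == 0 ) failed: ref leaked\n");

            pthread_join(recalc, NULL);
            return 0;
    }

In this model, joining the recalc thread before the final check removes the race, which is presumably the shape any Lustre-side fix would take: wait out or cancel pool recalc before the pool and device are freed.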