| Match messages in logs (every line must be present in the log output; copy from the "Messages before crash" column below; see the matching sketch after this form): | |
| Match messages in full crash (every line must be present in the crash log output; copy from the "Full Crash" column below): | |
| Limit to a test (copy from the "Failing Test" column below): | |
| Delete these reports as invalid (real bug already in review, or similar) | |
| Bug or comment: | |
| Extra info: | |
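
The fields above use a per-line containment rule: a report matches only when every non-empty line of the supplied pattern appears somewhere in the corresponding log or crash output. The sketch below only illustrates that rule, assuming plain substring matching per line; the `matches` helper and the sample pattern are hypothetical and are not the actual autotest matcher.

```python
def matches(pattern_lines, log_text):
    """Return True only if every non-empty pattern line occurs in log_text.

    Assumption: each pattern line is treated as a plain substring that must
    appear in at least one line of the log. This is an illustration of the
    matching rule described in the form, not the real implementation.
    """
    log_lines = log_text.splitlines()
    return all(
        any(pattern in line for line in log_lines)
        for pattern in (p.strip() for p in pattern_lines)
        if pattern
    )


if __name__ == "__main__":
    # Hypothetical pattern, copied from fragments of the first row's "Full Crash" cell.
    crash_pattern = [
        "ASSERTION( lqe->u.se.lse_pending_write == 0 ) failed:",
        "lquota_site_free",
    ]
    console_log = (
        "LustreError: 459509:0:(lquota_entry.c:118:lqe_iter_cb()) "
        "ASSERTION( lqe->u.se.lse_pending_write == 0 ) failed:\n"
        "lquota_site_free+0xf1/0x2b0 [lquota]\n"
    )
    print(matches(crash_pattern, console_log))                   # True: every pattern line is present
    print(matches(crash_pattern + ["qsd_reint"], console_log))   # False: one pattern line is missing
```
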
| Failing Test | Full Crash | Messages before crash | Comment |
|---|---|---|---|
| large-scale test 3a: recovery time, 2 clients | LustreError: 459509:0:(lquota_entry.c:118:lqe_iter_cb()) ASSERTION( lqe->u.se.lse_pending_write == 0 ) failed: LustreError: 459509:0:(lquota_entry.c:118:lqe_iter_cb()) LBUG CPU: 0 PID: 459509 Comm: umount Kdump: loaded Tainted: G OE ------- --- 5.14.0-570.62.1_lustre.el9.x86_64 #1 Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-4.el9 04/01/2014 Call Trace: <TASK> dump_stack_lvl+0x34/0x48 ? __pfx_lqe_iter_cb+0x10/0x10 [lquota] lbug_with_loc.cold+0x5/0x43 [libcfs] lqe_iter_cb+0x134/0x140 [lquota] cfs_hash_for_each_tight+0xeb/0x2d0 [obdclass] lquota_site_free+0xf1/0x2b0 [lquota] qsd_qtype_fini+0x8b/0x400 [lquota] qsd_fini+0x1fb/0x430 [lquota] osd_shutdown+0x43/0x110 [osd_ldiskfs] osd_process_config+0x21d/0x3c0 [osd_ldiskfs] lod_process_config+0x40f/0xfd0 [lod] mdd_process_config+0xaf/0x450 [mdd] mdt_stack_fini+0x302/0x640 [mdt] mdt_fini+0x305/0x580 [mdt] mdt_device_fini+0x2b/0xc0 [mdt] obd_precleanup.isra.0+0x8e/0x280 [obdclass] ? class_disconnect_exports+0x193/0x300 [obdclass] class_cleanup+0x2db/0x600 [obdclass] class_process_config+0x12ef/0x1e00 [obdclass] class_manual_cleanup+0x1e5/0x6f0 [obdclass] server_put_super+0xa05/0xbd0 [ptlrpc] generic_shutdown_super+0x7c/0x100 kill_anon_super+0x12/0x40 deactivate_locked_super+0x31/0xb0 cleanup_mnt+0x100/0x160 task_work_run+0x5c/0x90 exit_to_user_mode_loop+0x12b/0x130 exit_to_user_mode_prepare+0x6c/0x80 syscall_exit_to_user_mode+0x12/0x40 do_syscall_64+0x6b/0xe0 ? rcutree_enqueue+0x23/0x140 ? __pfx_file_free_rcu+0x10/0x10 ? __call_rcu_common.constprop.0+0xa7/0x2e0 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x6b/0xe0 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x6b/0xe0 ? _raw_spin_unlock_irq+0xa/0x30 ? sigprocmask+0xb4/0xe0 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x6b/0xe0 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x6b/0xe0 ? syscall_exit_work+0x103/0x130 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x6b/0xe0 ? exc_page_fault+0x62/0x150 entry_SYSCALL_64_after_hwframe+0x78/0x80 RIP: 0033:0x7f4ab190e39b | Lustre: DEBUG MARKER: /usr/sbin/lctl mark 1 : Starting failover on mds1 Lustre: DEBUG MARKER: 1 : Starting failover on mds1 Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1 Lustre: Failing over lustre-MDT0000 | Link to test |
| large-scale test 3a: recovery time, 2 clients | LustreError: 426776:0:(lquota_entry.c:118:lqe_iter_cb()) ASSERTION( lqe->u.se.lse_pending_write == 0 ) failed: LustreError: 426776:0:(lquota_entry.c:118:lqe_iter_cb()) LBUG CPU: 0 PID: 426776 Comm: umount Kdump: loaded Tainted: G OE ------- --- 5.14.0-427.42.1_lustre.el9.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: <TASK> dump_stack_lvl+0x34/0x48 ? __pfx_lqe_iter_cb+0x10/0x10 [lquota] lbug_with_loc.cold+0x5/0x43 [libcfs] lqe_iter_cb+0x134/0x140 [lquota] cfs_hash_for_each_tight+0xeb/0x2d0 [obdclass] lquota_site_free+0xf1/0x2c0 [lquota] qsd_qtype_fini+0x8b/0x420 [lquota] qsd_fini+0x1fb/0x460 [lquota] osd_shutdown+0x43/0x110 [osd_ldiskfs] osd_process_config+0x21d/0x3c0 [osd_ldiskfs] lod_process_config+0x40f/0x1000 [lod] mdd_process_config+0xaf/0x450 [mdd] mdt_stack_fini+0x498/0x700 [mdt] mdt_fini+0x305/0x580 [mdt] mdt_device_fini+0x2b/0xc0 [mdt] obd_precleanup+0xdc/0x280 [obdclass] ? class_disconnect_exports+0x193/0x300 [obdclass] class_cleanup+0x2d7/0x600 [obdclass] class_process_config+0x1102/0x1ab0 [obdclass] class_manual_cleanup+0x43b/0x7a0 [obdclass] server_put_super+0x98f/0xb40 [ptlrpc] generic_shutdown_super+0x74/0x120 kill_anon_super+0x14/0x30 deactivate_locked_super+0x31/0xa0 cleanup_mnt+0x100/0x160 task_work_run+0x5c/0x90 exit_to_user_mode_loop+0x122/0x130 exit_to_user_mode_prepare+0xb6/0x100 syscall_exit_to_user_mode+0x12/0x40 do_syscall_64+0x69/0x90 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x69/0x90 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 ? exc_page_fault+0x62/0x150 entry_SYSCALL_64_after_hwframe+0x77/0xe1 RIP: 0033:0x7f965a50df0b | Autotest: Test running for 140 minutes (lustre-master-next_full-part-1_894.88) Lustre: DEBUG MARKER: /usr/sbin/lctl mark 1 : Starting failover on mds1 Lustre: DEBUG MARKER: 1 : Starting failover on mds1 Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1 Lustre: Failing over lustre-MDT0000 | Link to test |
| large-scale test 3a: recovery time, 2 clients | LustreError: 426710:0:(lquota_entry.c:118:lqe_iter_cb()) ASSERTION( lqe->u.se.lse_pending_write == 0 ) failed: LustreError: 426710:0:(lquota_entry.c:118:lqe_iter_cb()) LBUG CPU: 1 PID: 426710 Comm: umount Kdump: loaded Tainted: G OE ------- --- 5.14.0-427.42.1_lustre.el9.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: <TASK> dump_stack_lvl+0x34/0x48 ? __pfx_lqe_iter_cb+0x10/0x10 [lquota] lbug_with_loc.cold+0x5/0x43 [libcfs] lqe_iter_cb+0x134/0x140 [lquota] cfs_hash_for_each_tight+0xeb/0x2d0 [obdclass] lquota_site_free+0xf1/0x2c0 [lquota] qsd_qtype_fini+0x8b/0x420 [lquota] qsd_fini+0x1fb/0x460 [lquota] osd_shutdown+0x43/0x110 [osd_ldiskfs] osd_process_config+0x21d/0x3c0 [osd_ldiskfs] lod_process_config+0x40f/0xfc0 [lod] mdd_process_config+0xaf/0x450 [mdd] mdt_stack_fini+0x498/0x700 [mdt] mdt_fini+0x305/0x580 [mdt] mdt_device_fini+0x2b/0xc0 [mdt] obd_precleanup+0xf3/0x220 [obdclass] ? class_disconnect_exports+0x193/0x300 [obdclass] class_cleanup+0x2d7/0x600 [obdclass] class_process_config+0x1102/0x1ab0 [obdclass] class_manual_cleanup+0x43b/0x7a0 [obdclass] server_put_super+0x98f/0xb40 [ptlrpc] generic_shutdown_super+0x74/0x120 kill_anon_super+0x14/0x30 deactivate_locked_super+0x31/0xa0 cleanup_mnt+0x100/0x160 task_work_run+0x5c/0x90 exit_to_user_mode_loop+0x122/0x130 exit_to_user_mode_prepare+0xb6/0x100 syscall_exit_to_user_mode+0x12/0x40 do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x69/0x90 entry_SYSCALL_64_after_hwframe+0x77/0xe1 RIP: 0033:0x7ff242b0df0b | Lustre: DEBUG MARKER: /usr/sbin/lctl mark 1 : Starting failover on mds1 Lustre: DEBUG MARKER: 1 : Starting failover on mds1 Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1 Lustre: Failing over lustre-MDT0000 | Link to test |
| large-scale test 3a: recovery time, 2 clients | LustreError: 422946:0:(lquota_entry.c:118:lqe_iter_cb()) ASSERTION( lqe->u.se.lse_pending_write == 0 ) failed: LustreError: 422946:0:(lquota_entry.c:118:lqe_iter_cb()) LBUG CPU: 1 PID: 422946 Comm: umount Kdump: loaded Tainted: G OE ------- --- 5.14.0-427.42.1_lustre.el9.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: <TASK> dump_stack_lvl+0x34/0x48 ? __pfx_lqe_iter_cb+0x10/0x10 [lquota] lbug_with_loc.cold+0x5/0x43 [libcfs] lqe_iter_cb+0x134/0x140 [lquota] cfs_hash_for_each_tight+0xeb/0x2d0 [obdclass] lquota_site_free+0xf1/0x2c0 [lquota] qsd_qtype_fini+0x8b/0x420 [lquota] qsd_fini+0x1fb/0x460 [lquota] osd_shutdown+0x43/0x110 [osd_ldiskfs] osd_process_config+0x21d/0x3c0 [osd_ldiskfs] lod_process_config+0x40f/0xfc0 [lod] mdd_process_config+0xaf/0x450 [mdd] mdt_stack_fini+0x498/0x700 [mdt] mdt_fini+0x305/0x580 [mdt] mdt_device_fini+0x2b/0xc0 [mdt] obd_precleanup+0xf3/0x220 [obdclass] ? class_disconnect_exports+0x193/0x300 [obdclass] class_cleanup+0x2d7/0x600 [obdclass] class_process_config+0x1102/0x1ab0 [obdclass] class_manual_cleanup+0x43b/0x7a0 [obdclass] server_put_super+0x998/0xb30 [ptlrpc] generic_shutdown_super+0x74/0x120 kill_anon_super+0x14/0x30 deactivate_locked_super+0x31/0xa0 cleanup_mnt+0x100/0x160 task_work_run+0x5c/0x90 exit_to_user_mode_loop+0x122/0x130 exit_to_user_mode_prepare+0xb6/0x100 syscall_exit_to_user_mode+0x12/0x40 do_syscall_64+0x69/0x90 ? syscall_exit_work+0x103/0x130 ? syscall_exit_to_user_mode+0x19/0x40 ? do_syscall_64+0x69/0x90 ? do_syscall_64+0x69/0x90 entry_SYSCALL_64_after_hwframe+0x77/0xe1 RIP: 0033:0x7f9d9c30df0b | Lustre: DEBUG MARKER: /usr/sbin/lctl mark 1 : Starting failover on mds1 Lustre: DEBUG MARKER: 1 : Starting failover on mds1 Lustre: DEBUG MARKER: grep -c /mnt/lustre-mds1' ' /proc/mounts || true Lustre: DEBUG MARKER: umount -d /mnt/lustre-mds1 Lustre: Failing over lustre-MDT0000 | Link to test |