| Match messages in logs (every line listed must be present in the log output; copy from the "Messages before crash" column below): | |
| Match messages in full crash (every line listed must be present in the crash log output; copy from the "Full Crash" column below): | |
| Limit to a test (copy from the "Failing Test" column below): | |
| Delete these reports as invalid (e.g. the failure is a real bug in the change under review): | |
| Bug or comment: | |
| Extra info: | |
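For illustration only: a filled-in match for the reports below might look like the sketch that follows. The field layout is an assumption; the values are copied verbatim from the first report in the table.

```
Limit to a test:
    sanity-sec test 26: test transferring very large nodemap

Match messages in full crash:
    LustreError: 15557:0:(nodemap_handler.c:4766:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed:
    LustreError: 15557:0:(nodemap_handler.c:4766:nodemap_config_set_active()) LBUG
```

Note that the PID (15557) and source line (4766) vary across the reports below, so a match intended to cover all of them would likely need to omit those parts, depending on how partial-line matching is handled.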
| Failing Test | Full Crash | Messages before crash | Comment |
|---|---|---|---|
| sanity-sec test 26: test transferring very large nodemap | LustreError: 15557:0:(nodemap_handler.c:4766:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 15557:0:(nodemap_handler.c:4766:nodemap_config_set_active()) LBUG CPU: 0 PID: 15557 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.89.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9_5.1 04/01/2014 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x31c/0x330 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] ? srso_alias_return_thunk+0x5/0xfcdfd mgc_process_nodemap_log+0x410/0xc00 [mgc] mgc_process_log+0xd23/0xe40 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xe40/0xe40 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x1f/0x40 | Autotest: Test running for 50 minutes (lustre-reviews_custom_121179.1001) Autotest: Test running for 55 minutes (lustre-reviews_custom_121179.1001) Autotest: Test running for 60 minutes (lustre-reviews_custom_121179.1001) Autotest: Test running for 65 minutes (lustre-reviews_custom_121179.1001) Autotest: Test running for 70 minutes (lustre-reviews_custom_121179.1001) LNet: Host 10.240.28.40 reset our connection while we were sending data; it may have rebooted: rc = -104 Lustre: 14066:0:(client.c:2478:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1770029344/real 1770029344] req@ff35b4d5b79d7a80 x1856005954179584/t0(0) o400->lustre-MDT0001-lwp-OST0001@10.240.28.40@tcp:12/10 lens 224/224 e 0 to 1 dl 1770029360 ref 1 fl Rpc:eXNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 projid:4294967295 Lustre: lustre-MDT0001-lwp-OST0001: Connection to lustre-MDT0001 (at 10.240.28.40@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 7 previous similar messages Autotest: Killing test framework, node(s) in the cluster crashed (lustre-reviews_custom_121179.1001) Lustre: lustre-MDT0001-lwp-OST0004: Connection to lustre-MDT0001 (at 10.240.28.40@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 7 previous similar messages LNet: 1 local NIs in recovery (showing 1): 10.240.28.38@tcp Autotest: Sleeping to ensure other nodes in the cluster have not crashed (lustre-reviews_custom_121179.1001) Lustre: lustre-OST0007: haven't heard from client lustre-MDT0001-mdtlov_UUID (at 10.240.28.40@tcp) in 103 seconds. I think it's dead, and I am evicting it. exp ff35b4d5b6b0a800, cur 1770029434 deadline 1770029431 last 1770029331 | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 15522:0:(nodemap_handler.c:4766:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 15522:0:(nodemap_handler.c:4766:nodemap_config_set_active()) LBUG CPU: 1 PID: 15522 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.89.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9_5.1 04/01/2014 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x31c/0x330 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] ? srso_alias_return_thunk+0x5/0xfcdfd mgc_process_nodemap_log+0x410/0xc00 [mgc] mgc_process_log+0xd23/0xe40 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xe40/0xe40 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x1f/0x40 | Autotest: Test running for 50 minutes (lustre-reviews_custom_121177.1003) Autotest: Test running for 55 minutes (lustre-reviews_custom_121177.1003) Autotest: Test running for 60 minutes (lustre-reviews_custom_121177.1003) Autotest: Test running for 65 minutes (lustre-reviews_custom_121177.1003) LNet: Host 10.240.46.219 reset our connection while we were sending data; it may have rebooted: rc = -104 Lustre: 14030:0:(client.c:2478:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1770028642/real 1770028642] req@ff4f87aa444a89c0 x1856005346932608/t0(0) o400->lustre-MDT0003-lwp-OST0000@10.240.46.219@tcp:12/10 lens 224/224 e 0 to 1 dl 1770028658 ref 1 fl Rpc:eXNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 projid:4294967295 Lustre: 14030:0:(client.c:2478:ptlrpc_expire_one_request()) Skipped 1 previous similar message Lustre: lustre-MDT0003-lwp-OST0000: Connection to lustre-MDT0003 (at 10.240.46.219@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 7 previous similar messages Autotest: Killing test framework, node(s) in the cluster crashed (lustre-reviews_custom_121177.1003) Lustre: 14031:0:(client.c:2478:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1770028637/real 1770028637] req@ff4f87aa75c98000 x1856005346928640/t0(0) o400->lustre-MDT0001-lwp-OST0001@10.240.46.219@tcp:12/10 lens 224/224 e 0 to 1 dl 1770028653 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 projid:4294967295 Lustre: 14031:0:(client.c:2478:ptlrpc_expire_one_request()) Skipped 15 previous similar messages Autotest: Test running for 70 minutes (lustre-reviews_custom_121177.1003) Autotest: Sleeping to ensure other nodes in the cluster have not crashed (lustre-reviews_custom_121177.1003) Lustre: lustre-OST0006: haven't heard from client lustre-MDT0001-mdtlov_UUID (at 10.240.46.219@tcp) in 101 seconds. I think it's dead, and I am evicting it. exp ff4f87aa75925000, cur 1770028734 deadline 1770028733 last 1770028633 Lustre: lustre-OST0007: haven't heard from client lustre-MDT0001-mdtlov_UUID (at 10.240.46.219@tcp) in 101 seconds. I think it's dead, and I am evicting it. exp ff4f87aa75923800, cur 1770028734 deadline 1770028733 last 1770028633 Lustre: Skipped 3 previous similar messages Lustre: lustre-OST0001: haven't heard from client lustre-MDT0003-mdtlov_UUID (at 10.240.46.219@tcp) in 102 seconds. I think it's dead, and I am evicting it. exp ff4f87aa72eea800, cur 1770028736 deadline 1770028733 last 1770028634 Lustre: Skipped 3 previous similar messages | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 779187:0:(nodemap_handler.c:4766:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 779187:0:(nodemap_handler.c:4766:nodemap_config_set_active()) LBUG CPU: 0 PID: 779187 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.89.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9_5.1 04/01/2014 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x31c/0x330 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] ? srso_alias_return_thunk+0x5/0xfcdfd mgc_process_nodemap_log+0x410/0xc00 [mgc] mgc_process_log+0xd23/0xe40 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xe40/0xe40 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x1f/0x40 | Autotest: Test running for 435 minutes (lustre-master_full-part-2_4697.65) Autotest: Test running for 440 minutes (lustre-master_full-part-2_4697.65) Autotest: Test running for 445 minutes (lustre-master_full-part-2_4697.65) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 737612:0:(nodemap_handler.c:4460:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 737612:0:(nodemap_handler.c:4460:nodemap_config_set_active()) LBUG CPU: 1 PID: 737612 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.76.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] mgc_process_nodemap_log+0x410/0xc00 [mgc] mgc_process_log+0xd23/0xe40 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xe40/0xe40 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 490 minutes (lustre-master_full-part-2_4675.50) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 850068:0:(nodemap_handler.c:4452:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 850068:0:(nodemap_handler.c:4452:nodemap_config_set_active()) LBUG CPU: 1 PID: 850068 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.76.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 1.16.3-2.el9_5.1 04/01/2014 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] ? srso_alias_return_thunk+0x5/0xfcdfd mgc_process_nodemap_log+0x410/0xc00 [mgc] mgc_process_log+0xd23/0xe40 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xe40/0xe40 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x1f/0x40 | Autotest: Test running for 485 minutes (lustre-master_full-dne-part-2_4668.8) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 855504:0:(nodemap_handler.c:4452:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 855504:0:(nodemap_handler.c:4452:nodemap_config_set_active()) LBUG CPU: 0 PID: 855504 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.76.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] mgc_process_nodemap_log+0x425/0xc10 [mgc] mgc_process_log+0xda5/0xec0 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xec0/0xec0 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 580 minutes (lustre-master_full-dne-part-2_4660.8) LustreError: 861828:0:(qsd_reint.c:618:qqi_reint_delayed()) lustre-OST0003: Delaying reintegration for qtype:0 until pending updates are flushed. LustreError: 861828:0:(qsd_reint.c:618:qqi_reint_delayed()) Skipped 21 previous similar messages | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 12142:0:(nodemap_handler.c:4443:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 12142:0:(nodemap_handler.c:4443:nodemap_config_set_active()) LBUG CPU: 1 PID: 12142 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.71.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 1.16.3-2.el9_5.1 04/01/2014 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] ? srso_alias_return_thunk+0x5/0xfcdfd mgc_process_nodemap_log+0x421/0xc10 [mgc] mgc_process_log+0xd51/0xee0 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xee0/0xee0 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x1f/0x40 | Autotest: Test running for 110 minutes (lustre-master-next_full-part-2_921.2) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 13662:0:(nodemap_handler.c:4410:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 13662:0:(nodemap_handler.c:4410:nodemap_config_set_active()) LBUG CPU: 1 PID: 13662 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.71.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] mgc_process_nodemap_log+0x425/0xc10 [mgc] mgc_process_log+0xd51/0xee0 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xee0/0xee0 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 155 minutes (lustre-master-next_full-dne-part-2_917.182) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 6283:0:(nodemap_handler.c:4410:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 6283:0:(nodemap_handler.c:4410:nodemap_config_set_active()) LBUG CPU: 0 PID: 6283 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.71.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] mgc_process_nodemap_log+0x425/0xc10 [mgc] mgc_process_log+0xd51/0xee0 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xee0/0xee0 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 155 minutes (lustre-master-next_full-dne-part-2_917.182) LNet: Host 10.240.30.173 reset our connection while we were sending data; it may have rebooted: rc = -104 Lustre: 6239:0:(client.c:2464:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1757403443/real 1757403443] req@ffff9d9b22846a40 x1842761128366976/t0(0) o400->lustre-OST0005-osc-MDT0001@10.240.30.173@tcp:28/4 lens 224/224 e 0 to 1 dl 1757403459 ref 1 fl Rpc:eXNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 projid:4294967295 Lustre: lustre-OST0005-osc-MDT0001: Connection to lustre-OST0005 (at 10.240.30.173@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 11 previous similar messages Lustre: 6240:0:(client.c:2464:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1757403438/real 1757403438] req@ffff9d9c031ded80 x1842761128364032/t0(0) o13->lustre-OST0001-osc-MDT0001@10.240.30.173@tcp:7/4 lens 224/368 e 0 to 1 dl 1757403454 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-1-1.0' uid:0 gid:0 projid:4294967295 Lustre: 6240:0:(client.c:2464:ptlrpc_expire_one_request()) Skipped 31 previous similar messages Autotest: Killing test framework, node(s) in the cluster crashed (lustre-master-next_full-dne-part-2_917.182) Autotest: Sleeping to ensure other nodes in the cluster have not crashed (lustre-master-next_full-dne-part-2_917.182) Autotest: Test running for 160 minutes (lustre-master-next_full-dne-part-2_917.182) Autotest: onyx-147vm2 crashed during sanity-sec (lustre-master-next_full-dne-part-2_917.182) Lustre: lustre-MDT0003: haven't heard from client lustre-MDT0003-lwp-OST0000_UUID (at 10.240.30.173@tcp) in 102 seconds. I think it's dead, and I am evicting it. exp ffff9d9c389d6000, cur 1757403534 deadline 1757403532 last 1757403432 | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 12448:0:(nodemap_handler.c:4388:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 12448:0:(nodemap_handler.c:4388:nodemap_config_set_active()) LBUG CPU: 1 PID: 12448 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.58.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.6+0x5/0x43 [libcfs] nodemap_config_set_active+0x2b0/0x2c0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x270 [ptlrpc] mgc_process_nodemap_log+0x425/0xc10 [mgc] mgc_process_log+0xd51/0xee0 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_log+0xee0/0xee0 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 220 minutes (lustre-master_full-part-2_4648.32) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 13646:0:(nodemap_handler.c:2826:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 13646:0:(nodemap_handler.c:2826:nodemap_config_set_active()) LBUG CPU: 0 PID: 13646 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x43 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x220 [ptlrpc] mgc_process_nodemap_log+0x433/0xcb0 [mgc] mgc_process_log+0xcfc/0xf70 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 6292:0:(nodemap_handler.c:2826:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 6292:0:(nodemap_handler.c:2826:nodemap_config_set_active()) LBUG CPU: 0 PID: 6292 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x43 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x220 [ptlrpc] mgc_process_nodemap_log+0x433/0xcb0 [mgc] mgc_process_log+0xcfc/0xf70 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 140 minutes (lustre-master_full-dne-part-2_4627.164) LNet: Host 10.240.26.185 reset our connection while we were sending data; it may have rebooted: rc = -104 Lustre: 6240:0:(client.c:2451:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1746887817/real 1746887817] req@ffff8f8cd8901d40 x1831735884103936/t0(0) o400->lustre-OST0000-osc-MDT0001@10.240.26.185@tcp:28/4 lens 224/224 e 0 to 1 dl 1746887833 ref 1 fl Rpc:eXNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 projid:4294967295 Lustre: lustre-OST0003-osc-MDT0001: Connection to lustre-OST0003 (at 10.240.26.185@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: 6240:0:(client.c:2451:ptlrpc_expire_one_request()) Skipped 1 previous similar message Lustre: Skipped 3 previous similar messages Autotest: Killing test framework, node(s) in the cluster crashed (lustre-master_full-dne-part-2_4627.164) Lustre: 6239:0:(client.c:2451:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1746887809/real 1746887809] req@ffff8f8d0428e3c0 x1831735884097408/t0(0) o13->lustre-OST0001-osc-MDT0001@10.240.26.185@tcp:7/4 lens 224/368 e 0 to 1 dl 1746887825 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-1-1.0' uid:0 gid:0 projid:4294967295 Lustre: 6239:0:(client.c:2451:ptlrpc_expire_one_request()) Skipped 16 previous similar messages Lustre: 6239:0:(client.c:2451:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1746887812/real 1746887812] req@ffff8f8d0428f400 x1831735884098048/t0(0) o13->lustre-OST0006-osc-MDT0003@10.240.26.185@tcp:7/4 lens 224/368 e 0 to 1 dl 1746887828 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-6-3.0' uid:0 gid:0 projid:4294967295 Lustre: 6239:0:(client.c:2451:ptlrpc_expire_one_request()) Skipped 1 previous similar message Lustre: 6240:0:(client.c:2451:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1746887814/real 1746887814] req@ffff8f8ce7f26a40 x1831735884102656/t0(0) o13->lustre-OST0004-osc-MDT0001@10.240.26.185@tcp:7/4 lens 224/368 e 0 to 1 dl 1746887830 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-4-1.0' uid:0 gid:0 projid:4294967295 Lustre: 6240:0:(client.c:2451:ptlrpc_expire_one_request()) Skipped 25 previous similar messages Autotest: Sleeping to ensure other nodes in the cluster have not crashed (lustre-master_full-dne-part-2_4627.164) Lustre: lustre-MDT0003: haven't heard from client lustre-MDT0003-lwp-OST0000_UUID (at 10.240.26.185@tcp) in 101 seconds. I think it's dead, and I am evicting it. exp ffff8f8cf0cac000, cur 1746887907 deadline 1746887906 last 1746887806 | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 13199:0:(nodemap_handler.c:2272:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 13199:0:(nodemap_handler.c:2272:nodemap_config_set_active()) LBUG CPU: 1 PID: 13199 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.40.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x43 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x435/0xcb0 [mgc] mgc_process_log+0xcfc/0xf70 [mgc] mgc_requeue_thread+0x29c/0x700 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 6315:0:(nodemap_handler.c:2253:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 6315:0:(nodemap_handler.c:2253:nodemap_config_set_active()) LBUG CPU: 0 PID: 6315 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x58 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x435/0xcb0 [mgc] mgc_process_log+0xcfc/0xf70 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 250 minutes (lustre-master_full-dne-part-2_4598.104) Autotest: Killing test framework, node(s) in the cluster crashed (lustre-master_full-dne-part-2_4598.104) LNet: Host 10.240.27.23 reset our connection while we were sending data; it may have rebooted: rc = -104 Lustre: 6271:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1736077143/real 1736077143] req@ffff9eb86b1749c0 x1820392915732224/t0(0) o400->lustre-OST0005-osc-MDT0001@10.240.27.23@tcp:28/4 lens 224/224 e 0 to 1 dl 1736077159 ref 1 fl Rpc:eXNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: lustre-OST0001-osc-MDT0001: Connection to lustre-OST0001 (at 10.240.27.23@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: 6271:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 1 previous similar message Lustre: Skipped 3 previous similar messages Autotest: Sleeping to ensure other nodes in the cluster have not crashed (lustre-master_full-dne-part-2_4598.104) Lustre: 6272:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1736077133/real 1736077133] req@ffff9eb86bd67400 x1820392915721088/t0(0) o13->lustre-OST0003-osc-MDT0003@10.240.27.23@tcp:7/4 lens 224/368 e 0 to 1 dl 1736077149 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-3-3.0' uid:0 gid:0 Lustre: 6272:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 6 previous similar messages Lustre: 6272:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1736077134/real 1736077134] req@ffff9eb86bd65a00 x1820392915725952/t0(0) o13->lustre-OST0006-osc-MDT0001@10.240.27.23@tcp:7/4 lens 224/368 e 0 to 1 dl 1736077150 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-6-1.0' uid:0 gid:0 Lustre: 6272:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 26 previous similar messages Lustre: 6271:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1736077138/real 1736077138] req@ffff9eb86f8e6a40 x1820392915728000/t0(0) o400->lustre-OST0000-osc-MDT0001@10.240.27.23@tcp:28/4 lens 224/224 e 0 to 1 dl 1736077154 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: 6271:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 4 previous similar messages Lustre: 6272:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1736077143/real 0] req@ffff9eb86b177a80 x1820392915732096/t0(0) o400->lustre-OST0004-osc-MDT0001@10.240.27.23@tcp:28/4 lens 224/224 e 0 to 1 dl 1736077159 ref 2 fl Rpc:XNr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: 6272:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 17 previous similar messages Lustre: lustre-MDT0003: haven't heard from client lustre-MDT0003-lwp-OST0000_UUID (at 10.240.27.23@tcp) in 32 seconds. I think it's dead, and I am evicting it. exp ffff9eb86ca00c00, cur 1736077161 expire 1736077131 last 1736077129 Lustre: lustre-MDT0001: haven't heard from client lustre-MDT0001-lwp-OST0000_UUID (at 10.240.27.23@tcp) in 33 seconds. I think it's dead, and I am evicting it. exp ffff9eb86daf6400, cur 1736077162 expire 1736077132 last 1736077129 Lustre: Skipped 7 previous similar messages | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 753302:0:(nodemap_handler.c:2001:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 753302:0:(nodemap_handler.c:2001:nodemap_config_set_active()) LBUG CPU: 1 PID: 753302 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE --------- - - 4.18.0-477.27.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x58 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x430/0xc80 [mgc] mgc_process_log+0xcf6/0xf50 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Killing test framework, node(s) in the cluster crashed (lustre-master-next_full-dne-part-2_841.158) Autotest: Sleeping to ensure other nodes in the cluster have not crashed (lustre-master-next_full-dne-part-2_841.158) Lustre: 11037:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1732559404/real 1732559404] req@ffff95b5518b9380 x1816664508870144/t0(0) o400->lustre-OST0001-osc-MDT0001@10.240.39.127@tcp:28/4 lens 224/224 e 0 to 1 dl 1732559420 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 10.240.39.127@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: 11037:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 7 previous similar messages Lustre: Skipped 1 previous similar message Lustre: 11037:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1732559405/real 1732559405] req@ffff95b5518ba3c0 x1816664508872960/t0(0) o13->lustre-OST0001-osc-MDT0001@10.240.39.127@tcp:7/4 lens 224/368 e 0 to 1 dl 1732559421 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-1-1.0' uid:0 gid:0 Lustre: 11037:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 14 previous similar messages Lustre: 11037:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1732559409/real 1732559409] req@ffff95b5402c1040 x1816664508874496/t0(0) o400->lustre-OST0001-osc-MDT0001@10.240.39.127@tcp:28/4 lens 224/224 e 0 to 1 dl 1732559425 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: 11037:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 18 previous similar messages Lustre: 11038:0:(client.c:2364:ptlrpc_expire_one_request()) @@@ Request sent has timed out for sent delay: [sent 1732559415/real 0] req@ffff95b5327c2080 x1816664508878464/t0(0) o400->lustre-OST0001-osc-MDT0001@10.240.39.127@tcp:28/4 lens 224/224 e 0 to 1 dl 1732559431 ref 2 fl Rpc:XNr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: 11038:0:(client.c:2364:ptlrpc_expire_one_request()) Skipped 7 previous similar messages Lustre: lustre-MDT0001: haven't heard from client lustre-MDT0001-lwp-OST0000_UUID (at 10.240.39.127@tcp) in 31 seconds. I think it's dead, and I am evicting it. exp ffff95b506079c00, cur 1732559433 expire 1732559403 last 1732559402 | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 832953:0:(nodemap_handler.c:1918:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 832953:0:(nodemap_handler.c:1918:nodemap_config_set_active()) LBUG CPU: 0 PID: 832953 Comm: ll_cfg_requeue Kdump: loaded Tainted: G W OE --------- - - 4.18.0-513.24.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x58 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x435/0xcb0 [mgc] mgc_process_log+0xce8/0xf60 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 525 minutes (lustre-master_full-dne-part-2_4586.20) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 6336:0:(nodemap_handler.c:1918:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 6336:0:(nodemap_handler.c:1918:nodemap_config_set_active()) LBUG CPU: 1 PID: 6336 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x58 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x435/0xcb0 [mgc] mgc_process_log+0xce8/0xf60 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 195 minutes (lustre-master_full-dne-part-2_4582.32) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 715702:0:(nodemap_handler.c:1918:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 715702:0:(nodemap_handler.c:1918:nodemap_config_set_active()) LBUG CPU: 0 PID: 715702 Comm: ll_cfg_requeue Kdump: loaded Tainted: G W OE --------- - - 4.18.0-477.27.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x58 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x430/0xc80 [mgc] mgc_process_log+0xce2/0xf30 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xe00/0xe00 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Autotest: Test running for 495 minutes (lustre-master_full-part-2_4576.2) | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 477830:0:(nodemap_handler.c:1844:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 477830:0:(nodemap_handler.c:1844:nodemap_config_set_active()) LBUG CPU: 1 PID: 477830 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE --------- - - 4.18.0-425.10.1.el8_7.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x43 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x683/0xd40 [mgc] mgc_process_log+0xd14/0xef0 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xdb0/0xdb0 [mgc] kthread+0x10b/0x130 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 1102626:0:(nodemap_handler.c:1844:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 1102626:0:(nodemap_handler.c:1844:nodemap_config_set_active()) LBUG CPU: 1 PID: 1102626 Comm: ll_cfg_requeue Kdump: loaded Tainted: G OE --------- - - 4.18.0-477.27.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 Call Trace: dump_stack+0x41/0x60 lbug_with_loc.cold.8+0x5/0x43 [libcfs] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] mgc_process_nodemap_log+0x683/0xd40 [mgc] mgc_process_log+0xd14/0xef0 [mgc] mgc_requeue_thread+0x299/0x710 [mgc] ? finish_wait+0x80/0x80 ? mgc_process_config+0xdb0/0xdb0 [mgc] kthread+0x134/0x150 ? set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 | Lustre: 10691:0:(client.c:2338:ptlrpc_expire_one_request()) @@@ Request sent has failed due to network error: [sent 1709658549/real 1709658549] req@ffff97ff3146e700 x1792647161032256/t0(0) o400->lustre-OST0000-osc-MDT0001@10.240.42.120@tcp:28/4 lens 224/224 e 0 to 1 dl 1709658565 ref 1 fl Rpc:eXNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: lustre-OST0000-osc-MDT0001: Connection to lustre-OST0000 (at 10.240.42.120@tcp) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 3 previous similar messages Lustre: 10691:0:(client.c:2338:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709658543/real 1709658543] req@ffff97ff5c549d40 x1792647161028864/t0(0) o13->lustre-OST0003-osc-MDT0001@10.240.42.120@tcp:7/4 lens 224/368 e 0 to 1 dl 1709658559 ref 1 fl Rpc:XQr/200/ffffffff rc 0/-1 job:'osp-pre-3-1.0' uid:0 gid:0 Lustre: 10691:0:(client.c:2338:ptlrpc_expire_one_request()) Skipped 15 previous similar messages Lustre: 10692:0:(client.c:2338:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1709658544/real 1709658544] req@ffff97ff2fe55a00 x1792647161030144/t0(0) o400->lustre-OST0000-osc-MDT0001@10.240.42.120@tcp:28/4 lens 224/224 e 0 to 1 dl 1709658560 ref 1 fl Rpc:XNQr/200/ffffffff rc 0/-1 job:'kworker.0' uid:0 gid:0 Lustre: 10692:0:(client.c:2338:ptlrpc_expire_one_request()) Skipped 7 previous similar messages Lustre: lustre-MDT0003: haven't heard from client lustre-MDT0003-lwp-OST0000_UUID (at 10.240.42.120@tcp) in 33 seconds. I think it's dead, and I am evicting it. exp ffff97ff464c1400, cur 1709658570 expire 1709658540 last 1709658537 Lustre: lustre-MDT0001: haven't heard from client lustre-MDT0001-lwp-OST0000_UUID (at 10.240.42.120@tcp) in 35 seconds. I think it's dead, and I am evicting it. exp ffff97ff427f4000, cur 1709658572 expire 1709658542 last 1709658537 Lustre: Skipped 7 previous similar messages | Link to test |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 12495:0:(nodemap_handler.c:1821:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 12495:0:(nodemap_handler.c:1821:nodemap_config_set_active()) LBUG Pid: 12495, comm: ll_cfg_requeue 4.18.0-477.15.1.el8_lustre.x86_64 #1 SMP Fri Sep 1 20:56:46 UTC 2023 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x683/0xd50 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x134/0x150 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 2098640:0:(nodemap_handler.c:1819:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 2098640:0:(nodemap_handler.c:1819:nodemap_config_set_active()) LBUG Pid: 2098640, comm: ll_cfg_requeue 4.18.0-425.10.1.el8_7.x86_64 #1 SMP Wed Dec 14 16:00:01 EST 2022 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x683/0xd50 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x10b/0x130 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 824496:0:(nodemap_handler.c:1819:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 824496:0:(nodemap_handler.c:1819:nodemap_config_set_active()) LBUG Pid: 824496, comm: ll_cfg_requeue 4.18.0-425.10.1.el8_lustre.x86_64 #1 SMP Wed Apr 5 05:10:27 UTC 2023 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x683/0xd50 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x10b/0x130 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 810581:0:(nodemap_handler.c:1819:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 810581:0:(nodemap_handler.c:1819:nodemap_config_set_active()) LBUG Pid: 810581, comm: ll_cfg_requeue 4.18.0-425.10.1.el8_lustre.x86_64 #1 SMP Wed Apr 5 04:55:50 UTC 2023 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x683/0xd50 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x10b/0x130 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 673409:0:(nodemap_handler.c:1773:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 673409:0:(nodemap_handler.c:1773:nodemap_config_set_active()) LBUG Pid: 673409, comm: ll_cfg_requeue 4.18.0-372.32.1.el8_lustre.x86_64 #1 SMP Wed Jan 4 16:54:23 UTC 2023 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x683/0xd50 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x10a/0x120 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 1326293:0:(nodemap_handler.c:1774:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 1326293:0:(nodemap_handler.c:1774:nodemap_config_set_active()) LBUG Pid: 1326293, comm: ll_cfg_requeue 4.18.0-240.22.1.el8_3.x86_64 #1 SMP Thu Apr 8 19:01:30 UTC 2021 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0x90 [libcfs] [<0>] lbug_with_loc+0x43/0x80 [libcfs] [<0>] nodemap_config_set_active+0x2a6/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x674/0xd20 [mgc] [<0>] mgc_process_log+0xc40/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x112/0x130 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 556089:0:(nodemap_handler.c:1774:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 556089:0:(nodemap_handler.c:1774:nodemap_config_set_active()) LBUG Pid: 556089, comm: ll_cfg_requeue 4.18.0-348.23.1.el8_lustre.x86_64 #1 SMP Sat Dec 24 04:17:20 UTC 2022 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0x90 [libcfs] [<0>] lbug_with_loc+0x43/0x80 [libcfs] [<0>] nodemap_config_set_active+0x2a6/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x674/0xd20 [mgc] [<0>] mgc_process_log+0xc40/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x116/0x130 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 674610:0:(nodemap_handler.c:1774:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 674610:0:(nodemap_handler.c:1774:nodemap_config_set_active()) LBUG Pid: 674610, comm: ll_cfg_requeue 4.18.0-425.3.1.el8_lustre.x86_64 #1 SMP Wed Nov 9 07:24:57 UTC 2022 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x678/0xd20 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x10a/0x120 [<0>] ret_from_fork+0x35/0x40 | Link to test | |
| sanity-sec test 26: test transferring very large nodemap | LustreError: 675160:0:(nodemap_handler.c:1774:nodemap_config_set_active()) ASSERTION( config->nmc_default_nodemap ) failed: LustreError: 675160:0:(nodemap_handler.c:1774:nodemap_config_set_active()) LBUG Pid: 675160, comm: ll_cfg_requeue 4.18.0-372.32.1.el8_lustre.x86_64 #1 SMP Thu Oct 27 18:54:42 UTC 2022 Call Trace TBD: [<0>] libcfs_call_trace+0x6f/0xa0 [libcfs] [<0>] lbug_with_loc+0x3f/0x70 [libcfs] [<0>] nodemap_config_set_active+0x2aa/0x2b0 [ptlrpc] [<0>] nodemap_config_set_active_mgc+0x3a/0x210 [ptlrpc] [<0>] mgc_process_nodemap_log+0x678/0xd20 [mgc] [<0>] mgc_process_log+0xc44/0xe10 [mgc] [<0>] mgc_requeue_thread+0x29e/0x740 [mgc] [<0>] kthread+0x10a/0x120 [<0>] ret_from_fork+0x35/0x40 | Link to test |