Match messages in logs (every line must be present in the log output; copy from the "Messages before crash" column below): | |
Match messages in full crash (every line must be present in the crash log output; copy from the "Full Crash" column below): | |
Limit to a test (copy from the "Failing Test" column below): | |
Delete these reports as invalid (e.g., a real bug already under review): | |
Bug or comment: | |
Extra info: |
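The "every line must be present" matching rule above can be sketched as a small shell helper (`match_all` is a hypothetical name, not the triage tool's actual implementation): each non-empty line of the pattern file must occur somewhere in the log, otherwise the report does not match.

```shell
# match_all: succeed only if every non-empty line of the patterns file
# appears (as a fixed string) somewhere in the log file.
# Hypothetical sketch of the matching rule; not the real triage code.
match_all() {
    patterns="$1"; log="$2"
    while IFS= read -r line; do
        # Skip empty pattern lines.
        [ -z "$line" ] && continue
        # -F: fixed-string match (log lines contain regex metacharacters),
        # -q: quiet; report and fail on the first missing line.
        grep -Fq -- "$line" "$log" || { echo "missing: $line"; return 1; }
    done < "$patterns"
    echo "all lines matched"
}
```

Fixed-string matching (`grep -F`) matters here: crash-log lines are full of `(`, `)`, `[`, and `*` characters that would otherwise be interpreted as regex syntax.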
| Failing Test | Full Crash | Messages before crash | Comment |
|---|---|---|---|
racer test 1: racer on clients: oleg269-client.virtnet DURATION=300 | LustreError: 13253:0:(lod_object.c:5090:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed: LustreError: 13253:0:(lod_object.c:5090:lod_xattr_set()) LBUG Pid: 13253, comm: mdt_rdpg00_003 3.10.0-7.9-debug #1 SMP Sat Mar 26 23:28:42 EDT 2022 Call Trace: [<0>] libcfs_call_trace+0x90/0xf0 [libcfs] [<0>] lbug_with_loc+0x4c/0xa0 [libcfs] [<0>] lod_xattr_set+0x1b13/0x1c90 [lod] [<0>] mdo_xattr_set+0xc0/0x4c0 [mdd] [<0>] mdd_xattr_set+0xf85/0x1200 [mdd] [<0>] mo_xattr_set+0x43/0x45 [mdt] [<0>] mdt_close_handle_layouts+0x9a4/0xee0 [mdt] [<0>] mdt_mfd_close+0x5b2/0xbb0 [mdt] [<0>] mdt_close_internal+0xb4/0x240 [mdt] [<0>] mdt_close+0x28c/0x970 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe | Lustre: 13249:0:(mdt_recovery.c:149:mdt_req_from_lrd()) @@@ restoring transno req@ffff8800a7a62d00 x1777033449396288/t4294967609(0) o101->7d988a7f-aff9-4a28-8aa3-b9f3fa9ec52d@192.168.201.69@tcp:495/0 lens 384/816 e 0 to 0 dl 1694711245 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0 LustreError: 6134:0:(mdt_handler.c:774:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000402:0x16:0x0] ACL: rc = -2 LustreError: 6133:0:(mdt_handler.c:774:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000401:0x2a4:0x0] ACL: rc = -2 Lustre: 13246:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x934:0x0] with magic=0xbd60bd0 LustreError: 8787:0:(mdt_handler.c:774:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000402:0xa55:0x0] ACL: rc = -2 | Link to test |
racer test 1: racer on clients: centos-85.localnet DURATION=2700 | LustreError: 24739:0:(lod_object.c:5090:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed: LustreError: 24739:0:(lod_object.c:5090:lod_xattr_set()) LBUG Pid: 24739, comm: mdt_rdpg07_004 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] libcfs_call_trace+0x90/0xf0 [libcfs] [<0>] lbug_with_loc+0x4c/0xa0 [libcfs] [<0>] lod_xattr_set+0x1b13/0x1c90 [lod] [<0>] mdo_xattr_set+0xc0/0x4c0 [mdd] [<0>] mdd_xattr_set+0xf85/0x1200 [mdd] [<0>] mo_xattr_set+0x43/0x45 [mdt] [<0>] mdt_close_handle_layouts+0x9a4/0xee0 [mdt] [<0>] mdt_mfd_close+0x5b2/0xbb0 [mdt] [<0>] mdt_close_internal+0xb4/0x240 [mdt] [<0>] mdt_close+0x28c/0x970 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe | Lustre: 15389:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 552, rollback = 7 Lustre: 15389:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 15389:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 15389:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/552/0, punch: 0/0/0, quota 1/3/0 Lustre: 15389:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 15389:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11016:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11016:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 11016:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11016:0:(osd_handler.c:1966:osd_trans_dump_creds()) 
Skipped 1 previous similar message Lustre: 11016:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11016:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11016:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11016:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11016:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11016:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11016:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11016:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11025:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11025:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 11025:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11025:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11025:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11025:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11025:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11025:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11025:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11025:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11025:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11025:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages LustreError: 
16572:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x200000401:0x20:0x0] mdc close failed: rc = -13 Lustre: 11018:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11018:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 11018:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11018:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11018:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11018:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11018:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 11018:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11018:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11018:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 11018:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11018:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 10139:0:(mdt_recovery.c:149:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802958fbac0 x1775657074202816/t4294970406(0) o101->44c8a2ad-cef5-446d-b453-c3cc3234292d@0@lo:34/0 lens 376/840 e 0 to 0 dl 1693398594 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0 Lustre: 10126:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x16b:0x0] with magic=0xbd60bd0 Lustre: 11002:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11002:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 17 previous similar messages Lustre: 
11002:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11002:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 17 previous similar messages Lustre: 11002:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11002:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 17 previous similar messages Lustre: 11002:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11002:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 17 previous similar messages Lustre: 11002:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11002:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 17 previous similar messages Lustre: 11002:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11002:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 17 previous similar messages Lustre: 11013:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11013:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 23 previous similar messages Lustre: 11013:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11013:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 11013:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11013:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 11013:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11013:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 11013:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11013:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 11013:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 
0/0/0, ref_del: 0/0/0 Lustre: 11013:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 11006:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11006:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 55 previous similar messages Lustre: 11006:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11006:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 55 previous similar messages Lustre: 11006:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11006:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 55 previous similar messages Lustre: 11006:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11006:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 55 previous similar messages Lustre: 11006:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11006:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 55 previous similar messages Lustre: 11006:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11006:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 55 previous similar messages Lustre: format at service.c:2372:ptlrpc_server_handle_request doesn't end in newline Lustre: 14751:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x8de:0x0] with magic=0xbd60bd0 Lustre: 14751:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: 31459:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0xa23:0x0] with magic=0xbd60bd0 Lustre: 31459:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: 11021:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 
515 < left 618, rollback = 7 Lustre: 11021:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 93 previous similar messages Lustre: 11021:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11021:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 93 previous similar messages Lustre: 11021:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11021:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 93 previous similar messages Lustre: 11021:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11021:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 93 previous similar messages Lustre: 11021:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 11021:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 93 previous similar messages Lustre: 11021:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11021:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 93 previous similar messages Lustre: mdt00_001: service thread pid 10127 was inactive for 40.069 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Pid: 10127, comm: mdt00_001 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: mdt01_000: service thread pid 10131 was inactive for 62.115 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Pid: 10131, comm: mdt01_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_reint_link+0x886/0xdd0 [mdt] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [<0>] mdt_reint_internal+0x76c/0xba0 [mdt] [<0>] mdt_reint+0x67/0x150 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Pid: 10140, comm: mdt03_002 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88028e123100/0xe192403b35d569a9 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0xd5e:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b35d5698d expref: 540 pid: 14706 timeout: 235 lvb_type: 0 LustreError: 9656:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) 
ldlm_cancel from 0@lo arrived at 1693398756 with bad export cookie 16254124628182051317 LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation mds_close to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 26000:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x200000401:0xce2:0x0] mdc close failed: rc = -5 LustreError: 26428:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0xcbe:0x0] error: rc = -108 LustreError: 26584:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c4a88: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo) Lustre: 15082:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 15082:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 11 previous similar messages Lustre: 15082:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 15082:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 15082:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 15082:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 15082:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 15082:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 15082:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 15082:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 11 previous similar 
messages Lustre: 15082:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 15082:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: format at ldlm_lockd.c:1538:ldlm_handle_enqueue doesn't end in newline Lustre: 17351:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x858:0x0] with magic=0xbd60bd0 Lustre: 17351:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: 14882:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0xe67:0x0] with magic=0xbd60bd0 Lustre: 14882:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: 10143:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x232f:0x0] with magic=0xbd60bd0 Lustre: 10143:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: 11006:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 11006:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 331 previous similar messages Lustre: 11006:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 11006:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 11006:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 11006:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 11006:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 11006:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 11006:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 
11006:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 11006:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 11006:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 331 previous similar messages 14[8679]: segfault at 8 ip 00007f4e23e777e8 sp 00007ffc218b15f0 error 4 in ld-2.17.so[7f4e23e6c000+22000] LustreError: 15134:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x200000403:0x1a50:0x0] mdc close failed: rc = -13 LustreError: 15134:0:(file.c:246:ll_close_inode_openhandle()) Skipped 15 previous similar messages 11[16857]: segfault at 4045bc ip 00000000004045bc sp 00007ffc9e3dff08 error 7 in 11[400000+6000] 5[28881]: segfault at 8 ip 00007fda45fa77e8 sp 00007fff057c0080 error 4 in ld-2.17.so[7fda45f9c000+22000] LustreError: 678:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x200000403:0x2022:0x0] mdc close failed: rc = -13 Lustre: mdt05_005: service thread pid 20742 was inactive for 62.126 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: mdt05_002: service thread pid 10147 was inactive for 62.015 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Lustre: Skipped 1 previous similar message Pid: 10147, comm: mdt05_002 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_intent_getxattr+0x78/0x320 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Pid: 10155, comm: mdt07_002 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Pid: 17351, comm: mdt02_006 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 
[mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88028f7ee1c0/0xe192403b362fb029 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x3211:0x0].0x0 bits 0x13/0x0 rrc: 16 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b362fafc7 expref: 1791 pid: 10145 timeout: 531 lvb_type: 0 LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c5d28: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: 9649:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693399052 with bad export cookie 16254124628182051121 Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 18566:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 18807:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c5d28: inode [0x200000401:0x315f:0x0] mdc close failed: rc = -108 LustreError: 18566:0:(llite_lib.c:1970:ll_md_setattr()) Skipped 1 previous similar message LustreError: 18472:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 18472:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 14 previous similar messages LustreError: 18639:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c5d28: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 18639:0:(mdc_request.c:1465:mdc_read_page()) Skipped 9 previous similar messages LustreError: 18807:0:(ldlm_resource.c:1125:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff88029a8c5d28: namespace resource [0x200000007:0x1:0x0].0x0 (ffff88028f1d5e40) refcount nonzero (1) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection restored to (at 0@lo) 17[21027]: segfault at 8 ip 00007f52259997e8 sp 00007fffdce37470 error 4 in ld-2.17.so[7f522598e000+22000] Lustre: format at service.c:2323:ptlrpc_server_handle_request doesn't end in newline 8[29984]: segfault at 8 ip 00007f367a0487e8 sp 00007ffd521a8580 error 4 in ld-2.17.so[7f367a03d000+22000] Lustre: mdt06_005: service thread pid 20181 was inactive for 62.076 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Lustre: Skipped 1 previous similar message LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880286248400/0xe192403b3642349c lrc: 3/0,0 mode: CR/CR res: [0x200000403:0x2ac4:0x0].0x0 bits 0xa/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b36423456 expref: 376 pid: 10136 timeout: 661 lvb_type: 0 LustreError: 10154:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff880080eb8008 ns: mdt-lustre-MDT0000_UUID lock: ffff88008b2a4000/0xe192403b364238e0 lrc: 1/0,0 mode: --/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 22 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xe192403b364238d2 expref: 337 pid: 10154 timeout: 0 lvb_type: 0 LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c5d28: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 4071:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5 LustreError: 4071:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 261 previous similar messages LustreError: 4133:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c5d28: inode [0x200000403:0x28fd:0x0] mdc close failed: rc = -108 LustreError: 3889:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000403:0x2ac4:0x0] error -108. 
LustreError: 4133:0:(file.c:246:ll_close_inode_openhandle()) Skipped 38 previous similar messages Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection restored to (at 0@lo) Lustre: 17452:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 17452:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 323 previous similar messages Lustre: 17452:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 17452:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 323 previous similar messages Lustre: 17452:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 17452:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 323 previous similar messages Lustre: 17452:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 17452:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 323 previous similar messages Lustre: 17452:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 17452:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 323 previous similar messages Lustre: 17452:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 17452:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 323 previous similar messages Lustre: mdt04_003: service thread pid 14706 was inactive for 40.093 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Lustre: Skipped 3 previous similar messages
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880296902980/0xe192403b36454fbd lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x2cd3:0x0].0x0 bits 0x1b/0x0 rrc: 23 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b36454fa8 expref: 1565 pid: 10136 timeout: 767 lvb_type: 0
LustreError: 14606:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88028ee4dd28 ns: mdt-lustre-MDT0000_UUID lock: ffff8802752efc00/0xe192403b364554cc lrc: 1/0,0 mode: --/PR res: [0x200000403:0x2cd3:0x0].0x0 bits 0x1b/0x0 rrc: 22 type: IBT gid 0 flags: 0x54a01400000020 nid: 0@lo remote: 0xe192403b36455391 expref: 1527 pid: 14606 timeout: 0 lvb_type: 0
LustreError: 9657:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693399288 with bad export cookie 16254124628183968953
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 14606:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: 9016:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x2cd3:0x0] error: rc = -5
LustreError: 9016:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 28 previous similar messages
LustreError: 8864:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000403:0x2cd3:0x0] error -108.
LustreError: 9415:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x200000403:0x2cdd:0x0] mdc close failed: rc = -108
LustreError: 9415:0:(file.c:246:ll_close_inode_openhandle()) Skipped 28 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo)
19[11480]: segfault at 8 ip 00007f64cf8c17e8 sp 00007ffd2371be30 error 4 in ld-2.17.so[7f64cf8b6000+22000]
Lustre: 19347:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0xe1f:0x0] with magic=0xbd60bd0
Lustre: 19347:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 3 previous similar messages
13[16015]: segfault at 8 ip 00007fbcfa2017e8 sp 00007fffb9407210 error 4 in ld-2.17.so[7fbcfa1f6000+22000]
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880294b21300/0xe192403b367f562d lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x16d3:0x0].0x0 bits 0x1b/0x0 rrc: 14 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b367f55cb expref: 930 pid: 28094 timeout: 948 lvb_type: 0
LustreError: 14706:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88027ce0ae98 ns: mdt-lustre-MDT0000_UUID lock: ffff8802862bc000/0xe192403b367f6648 lrc: 3/0,0 mode: --/PR res: [0x200000406:0x16d3:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x54a01400000020 nid: 0@lo remote: 0xe192403b367f6633 expref: 836 pid: 14706 timeout: 0 lvb_type: 0
LustreError: 9646:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693399468 with bad export cookie 16254124628191321592
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation mds_reint to node 0@lo failed: rc = -107
LustreError: Skipped 7 previous similar messages
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 23068:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000406:0x16d3:0x0] error: rc = -5
LustreError: 23068:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 12 previous similar messages
LustreError: 23273:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 23273:0:(llite_lib.c:1970:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 23260:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x200000405:0x1a74:0x0] mdc close failed: rc = -108
LustreError: 23260:0:(file.c:246:ll_close_inode_openhandle()) Skipped 19 previous similar messages
LustreError: 14706:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo)
Lustre: 14825:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x1bd8:0x0] with magic=0xbd60bd0
Lustre: 14825:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 11 previous similar messages
13[15567]: segfault at 8 ip 00007f09f48827e8 sp 00007ffeaf7b58d0 error 4 in ld-2.17.so[7f09f4877000+22000]
17[25395]: segfault at 8 ip 00007f69cd2ff7e8 sp 00007ffd666df500 error 4 in ld-2.17.so[7f69cd2f4000+22000]
17[28733]: segfault at 8 ip 00007f027d7dc7e8 sp 00007ffc73855cb0 error 4 in ld-2.17.so[7f027d7d1000+22000]
Lustre: format at client.c:742:ptlrpc_reassign_next_xid doesn't end in newline
Lustre: format at mdc_locks.c:745:mdc_finish_enqueue doesn't end in newline
Lustre: 10141:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000407:0x164a:0x0] with magic=0xbd60bd0
Lustre: 10141:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 17 previous similar messages
LustreError: 6035:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c5d28: inode [0x200000407:0x21fe:0x0] mdc close failed: rc = -13
LustreError: 6035:0:(file.c:246:ll_close_inode_openhandle()) Skipped 34 previous similar messages
17[8094]: segfault at 8 ip 00007f0a050b27e8 sp 00007ffda32da5f0 error 4 in ld-2.17.so[7f0a050a7000+22000]
8[9778]: segfault at 8 ip 00007fd5029c17e8 sp 00007ffcc7243870 error 4 in ld-2.17.so[7fd5029b6000+22000]
7[12723]: segfault at 8 ip 00007ff6cb2ed7e8 sp 00007ffc92652d80 error 4 in ld-2.17.so[7ff6cb2e2000+22000]
ptlrpc_watchdog_fire: 9 callbacks suppressed
Lustre: mdt06_005: service thread pid 20181 was inactive for 40.144 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Lustre: Skipped 2 previous similar messages
Pid: 20181, comm: mdt06_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock_try+0xa0/0x250 [mdt]
[<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt]
[<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88024c446940/0xe192403b371b0475 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x40b5:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b371b0459 expref: 3059 pid: 20181 timeout: 1241 lvb_type: 0
LustreError: 10134:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff880078a037e8 ns: mdt-lustre-MDT0000_UUID lock: ffff880284845680/0xe192403b371b2362 lrc: 1/0,0 mode: --/PR res: [0x200000407:0x40b5:0x0].0x0 bits 0x1b/0x0 rrc: 11 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xe192403b371b2354 expref: 2962 pid: 10134 timeout: 0 lvb_type: 0
LustreError: 10134:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c5d28: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 7295:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000407:0x40b5:0x0] error: rc = -5
LustreError: 7295:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 52 previous similar messages
LustreError: 7600:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 7600:0:(llite_lib.c:1970:ll_md_setattr()) Skipped 2 previous similar messages
LustreError: 7140:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000407:0x40b5:0x0] error -108.
LustreError: Skipped 6 previous similar messages
LustreError: 7675:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c5d28: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108
LustreError: 7675:0:(file.c:246:ll_close_inode_openhandle()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection restored to (at 0@lo)
Lustre: 4118:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 4118:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1485 previous similar messages
Lustre: 4118:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 4118:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1485 previous similar messages
Lustre: 4118:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 4118:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1485 previous similar messages
Lustre: 4118:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 4118:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1485 previous similar messages
Lustre: 4118:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 4118:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1485 previous similar messages
Lustre: 4118:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 4118:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1485 previous similar messages
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88028f0c6580/0xe192403b371e71dd lrc: 3/0,0 mode: CR/CR res: [0x200000408:0x173:0x0].0x0 bits 0xa/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b371e71c8 expref: 243 pid: 9173 timeout: 1345 lvb_type: 0
LustreError: 30503:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802f4fae678 ns: mdt-lustre-MDT0000_UUID lock: ffff8800789316c0/0xe192403b371e839c lrc: 1/0,0 mode: --/PR res: [0x200000408:0x173:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xe192403b371e835d expref: 192 pid: 30503 timeout: 0 lvb_type: 0
LustreError: 9656:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693399866 with bad export cookie 16254124628205319023
Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 12797:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000408:0x173:0x0] error -108.
LustreError: 12797:0:(vvp_io.c:1879:vvp_io_init()) Skipped 1 previous similar message
LustreError: 12910:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 12910:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 47 previous similar messages
LustreError: 30503:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection restored to (at 0@lo)
Lustre: 12354:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000409:0x98:0x0] with magic=0xbd60bd0
Lustre: 12354:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 9 previous similar messages
Lustre: mdt04_006: service thread pid 12182 was inactive for 40.105 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 12182, comm: mdt04_006 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_object_stripes_lock+0x126/0x660 [mdt]
[<0>] mdt_reint_setattr+0x7db/0x15f0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x76c/0xba0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880259588f40/0xe192403b372e41c0 lrc: 3/0,0 mode: PR/PR res: [0x200000409:0x6e5:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b372e419d expref: 2127 pid: 10148 timeout: 1461 lvb_type: 0
LustreError: 9631:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693399982 with bad export cookie 16254124628195121507
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 2617:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c4a88: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 2617:0:(mdc_request.c:1465:mdc_read_page()) Skipped 9 previous similar messages
LustreError: 2730:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 2730:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 170 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo)
Lustre: mdt05_000: service thread pid 10144 was inactive for 40.094 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 10144, comm: mdt05_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock_try+0xa0/0x250 [mdt]
[<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt]
[<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88028e3b8040/0xe192403b373b7a47 lrc: 3/0,0 mode: PR/PR res: [0x20000040a:0x541:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b373b7a01 expref: 656 pid: 15850 timeout: 1578 lvb_type: 0
LustreError: 9647:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693400099 with bad export cookie 16254124628205538536
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c5d28: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 19680:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x20000040a:0x541:0x0] error: rc = -5
LustreError: 19680:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 5 previous similar messages
LustreError: 19879:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c5d28: inode [0x20000040a:0x1f6:0x0] mdc close failed: rc = -108
LustreError: 19879:0:(file.c:246:ll_close_inode_openhandle()) Skipped 114 previous similar messages
LustreError: 19814:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c5d28: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 19814:0:(mdc_request.c:1465:mdc_read_page()) Skipped 29 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c5d28: Connection restored to (at 0@lo)
17[30226]: segfault at 0 ip (null) sp 00007ffc73f09bf8 error 14 in 17[400000+6000]
5[30855]: segfault at 8 ip 00007f4a3225d7e8 sp 00007ffec0f13de0 error 4 in ld-2.17.so[7f4a32252000+22000]
19[15523]: segfault at 8 ip 00007fd2229c17e8 sp 00007fff1cf9dd70 error 4 in ld-2.17.so[7fd2229b6000+22000]
Lustre: mdt07_002: service thread pid 10155 was inactive for 62.151 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 10155, comm: mdt07_002 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_check_lock+0xec/0x3c0 [mdt]
[<0>] mdt_reint_rename+0x1fc4/0x24f0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x76c/0xba0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88028f4707c0/0xe192403b375f80ea lrc: 3/0,0 mode: CR/CR res: [0x20000040b:0x111c:0x0].0x0 bits 0xa/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b375f8042 expref: 1056 pid: 3343 timeout: 1717 lvb_type: 0
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: 9659:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693400238 with bad export cookie 16254124628206569069
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 15569:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802f3ae2548 ns: mdt-lustre-MDT0000_UUID lock: ffff88025972cb40/0xe192403b375f8b47 lrc: 1/0,0 mode: --/PR res: [0x20000040b:0x111c:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xe192403b375f8b01 expref: 778 pid: 15569 timeout: 0 lvb_type: 0
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: Skipped 2 previous similar messages
LustreError: 17046:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x20000040a:0x195e:0x0] error -5.
LustreError: 17051:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x20000040a:0x195e:0x0] error: rc = -108
LustreError: 17051:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 46 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo)
Lustre: 17351:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040c:0x182:0x0] with magic=0xbd60bd0
Lustre: 17351:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 11 previous similar messages
4[28356]: segfault at 8 ip 00007fe313f5f7e8 sp 00007ffcabf56b10 error 4 in ld-2.17.so[7fe313f54000+22000]
6[17263]: segfault at 8 ip 00007f850e1fa7e8 sp 00007ffc31e12fa0 error 4 in ld-2.17.so[7f850e1ef000+22000]
Lustre: 7661:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7
Lustre: 7661:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1319 previous similar messages
Lustre: 7661:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 7661:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1319 previous similar messages
Lustre: 7661:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 7661:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1319 previous similar messages
Lustre: 7661:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 7661:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1319 previous similar messages
Lustre: 7661:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 7661:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1319 previous similar messages
Lustre: 7661:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 7661:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1319 previous similar messages
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: 5918:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x20000040c:0x2f73:0x0] error -5.
LustreError: 5918:0:(vvp_io.c:1879:vvp_io_init()) Skipped 2 previous similar messages
LustreError: 10147:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802f4488958 ns: mdt-lustre-MDT0000_UUID lock: ffff880274c6cf00/0xe192403b37d1411b lrc: 3/0,0 mode: PR/PR res: [0x20000040c:0x2f73:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0xe192403b37d140c0 expref: 747 pid: 10147 timeout: 0 lvb_type: 0
LustreError: 6134:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c4a88: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 6134:0:(mdc_request.c:1465:mdc_read_page()) Skipped 19 previous similar messages
Lustre: lustre-OST0003-osc-ffff88029a8c5d28: disconnect after 23s idle
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880263a307c0/0xe192403b37e68706 lrc: 3/0,0 mode: CR/CR res: [0x20000040d:0x8a4:0x0].0x0 bits 0xa/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b37e686e3 expref: 288 pid: 15878 timeout: 2094 lvb_type: 0
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message
LustreError: 26473:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88028e2b8958 ns: mdt-lustre-MDT0000_UUID lock: ffff88025c35a200/0xe192403b37e68c15 lrc: 1/0,0 mode: --/PR res: [0x20000040d:0x8a4:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xe192403b37e68beb expref: 258 pid: 26473 timeout: 0 lvb_type: 0
LustreError: 12928:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693400615 with bad export cookie 16254124628217250537
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 1 previous similar message
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: Skipped 1 previous similar message
LustreError: 4182:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 4182:0:(llite_lib.c:1970:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 4436:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x20000040d:0x8a4:0x0] error -108.
LustreError: 4436:0:(vvp_io.c:1879:vvp_io_init()) Skipped 1 previous similar message
LustreError: 4580:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c4a88: inode [0x20000040d:0x83d:0x0] mdc close failed: rc = -108
LustreError: 4580:0:(file.c:246:ll_close_inode_openhandle()) Skipped 126 previous similar messages
LustreError: 4209:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 4209:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 16 previous similar messages
LustreError: 4580:0:(ldlm_resource.c:1125:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff88029a8c4a88: namespace resource [0x200000401:0x1:0x0].0x0 (ffff88008af0a0c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
LustreError: 4580:0:(ldlm_resource.c:1125:ldlm_resource_complain()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo)
Lustre: Skipped 1 previous similar message
3[17451]: segfault at 8 ip 00007f8a80e6f7e8 sp 00007ffd4635e770 error 4 in ld-2.17.so[7f8a80e64000+22000]
11[27466]: segfault at 8 ip 00007f5532c917e8 sp 00007ffe2cb20180 error 4 in ld-2.17.so[7f5532c86000+22000]
7[28934]: segfault at 8 ip 00007f26a57077e8 sp 00007ffe412cd2d0 error 4 in ld-2.17.so[7f26a56fc000+22000]
14[8651]: segfault at 8 ip 00007f46af3c27e8 sp 00007fff845c1420 error 4 in ld-2.17.so[7f46af3b7000+22000]
2[12451]: segfault at 8 ip 00007f4fbf3e87e8 sp 00007ffe2f2726a0 error 4 in ld-2.17.so[7f4fbf3dd000+22000]
Lustre: mdt05_003: service thread pid 14606 was inactive for 40.026 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 14606, comm: mdt05_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock_try+0xa0/0x250 [mdt]
[<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt]
[<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt05_000: service thread pid 10144 was inactive for 62.045 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 10144, comm: mdt05_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt]
[<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 28081:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff880263a19bf8 ns: mdt-lustre-MDT0000_UUID lock: ffff88028955b100/0xe192403b38207bef lrc: 1/0,0 mode: --/PR res: [0x20000040e:0x16fd:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xe192403b38207be1 expref: 730 pid: 28081 timeout: 0 lvb_type: 0
LustreError: 9641:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693400800 with bad export cookie 16254124628218660736
LustreError: 17802:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 16685:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c4a88: [0x20000040e:0x16fd:0x0] lock enqueue fails: rc = -108
LustreError: 16685:0:(mdc_request.c:1465:mdc_read_page()) Skipped 17 previous similar messages
LustreError: 17642:0:(symlink.c:91:ll_readlink_internal()) lustre: inode [0x20000040b:0x642e:0x0]: rc = -108
1[18318]: segfault at 8 ip 00007fe366d317e8 sp 00007ffd03c94b60 error 4 in ld-2.17.so[7fe366d26000+22000]
19[28315]: segfault at 8 ip 00007efdfad657e8 sp 00007ffce3618f50 error 4 in ld-2.17.so[7efdfad5a000+22000]
5[29587]: segfault at 8 ip 00007f1fe11cb7e8 sp 00007fff2bead310 error 4 in ld-2.17.so[7f1fe11c0000+22000]
Lustre: mdt07_003: service thread pid 14697 was inactive for 62.170 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 14697, comm: mdt07_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock_try+0xa0/0x250 [mdt]
[<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt]
[<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 11-0: lustre-MDT0000-mdc-ffff88029a8c4a88: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 4 previous similar messages
LustreError: 2601:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 2601:0:(llite_lib.c:1970:ll_md_setattr()) Skipped 1 previous similar message
Lustre: 12354:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000410:0x415:0x0] with magic=0xbd60bd0
Lustre: 12354:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 43 previous similar messages
Lustre: 11714:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7
Lustre: 11714:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 885 previous similar messages
Lustre: 11714:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 11714:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 885 previous similar messages
Lustre: 11714:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 11714:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 885 previous similar messages
Lustre: 11714:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 11714:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 885 previous similar messages
Lustre: 11714:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 11714:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 885 previous similar messages
Lustre: 11714:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 11714:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 885 previous similar messages
LustreError: 27384:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 27384:0:(llite_lib.c:1970:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 27591:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c5d28: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 27591:0:(mdc_request.c:1465:mdc_read_page()) Skipped 21 previous similar messages
LustreError: 27645:0:(ldlm_resource.c:1125:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff88029a8c5d28: namespace resource [0x200000007:0x1:0x0].0x0 (ffff880293f716c0) refcount nonzero (2) after lock cleanup; forcing cleanup.
12[6323]: segfault at 8 ip 00007f812da9e7e8 sp 00007fffb41dc250 error 4 in ld-2.17.so[7f812da93000+22000]
16[31476]: segfault at 8 ip 00007f6a7ae4b7e8 sp 00007ffee6d87040 error 4 in ld-2.17.so[7f6a7ae40000+22000]
Lustre: mdt01_004: service thread pid 14879 was inactive for 40.001 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 14879, comm: mdt01_004 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock_try+0xa0/0x250 [mdt]
[<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt]
[<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Pid: 14825, comm: mdt01_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_intent_getxattr+0x78/0x320 [mdt]
[<0>] mdt_intent_opc+0x1dc/0xc40 [mdt]
[<0>] mdt_intent_policy+0xfa/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Pid: 24849, comm: mdt06_006 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_object_find_lock+0x54/0x170 [mdt]
[<0>] mdt_reint_setxattr+0x231/0x1070 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x76c/0xba0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc]
[<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880289579300/0xe192403b387217df lrc: 3/0,0 mode: PR/PR res: [0x200000410:0x200e:0x0].0x0 bits 0x13/0x0 rrc: 10 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xe192403b387217a0 expref: 880 pid: 14879 timeout: 2690 lvb_type: 0
LustreError: 9664:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 3 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 9631:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693401210 with bad export cookie 16254124628223206396
Lustre: Skipped 3 previous similar messages
LustreError: 167-0: lustre-MDT0000-mdc-ffff88029a8c4a88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: Skipped 3 previous similar messages
LustreError: 7210:0:(llite_lib.c:1970:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 7182:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000411:0xddb:0x0] error: rc = -108
LustreError: 7182:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 388 previous similar messages
LustreError: 7382:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029a8c4a88: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 7382:0:(mdc_request.c:1465:mdc_read_page()) Skipped 9 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88029a8c4a88: Connection restored to (at 0@lo)
Lustre: Skipped 3 previous similar messages
LustreError: 19575:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029a8c5d28: inode [0x200000411:0x1b9c:0x0] mdc close failed: rc = -13
LustreError: 19575:0:(file.c:246:ll_close_inode_openhandle()) Skipped 88 previous similar messages | Link to test |
racer test 1: racer on clients: centos-95.localnet DURATION=2700 | LustreError: 25827:0:(lod_object.c:5136:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed: LustreError: 25827:0:(lod_object.c:5136:lod_xattr_set()) LBUG Pid: 25827, comm: mdt_rdpg05_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] libcfs_call_trace+0x90/0xf0 [libcfs] [<0>] lbug_with_loc+0x4c/0xa0 [libcfs] [<0>] lod_xattr_set+0x1b13/0x1c90 [lod] [<0>] mdo_xattr_set+0xc0/0x4c0 [mdd] [<0>] mdd_xattr_set+0xf85/0x1200 [mdd] [<0>] mo_xattr_set+0x43/0x45 [mdt] [<0>] mdt_close_handle_layouts+0x9a4/0xee0 [mdt] [<0>] mdt_mfd_close+0x5b2/0xbb0 [mdt] [<0>] mdt_close_internal+0xb4/0x240 [mdt] [<0>] mdt_close+0x28c/0x970 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe | Lustre: 25811:0:(mdt_recovery.c:149:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802c65a6840 x1775241149006848/t4294967749(0) o101->32ba0428-fc6f-4c03-ba1d-a278520c7ab6@0@lo:500/0 lens 376/864 e 0 to 0 dl 1693001930 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0 Lustre: 26157:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 26157:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 26157:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 26157:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 26157:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 26157:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28233:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: 
before 515 < left 618, rollback = 7 Lustre: 28233:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 3 previous similar messages Lustre: 28233:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28233:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 28233:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28233:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 28233:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 28233:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 28233:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28233:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 28233:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28233:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 29029:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7 Lustre: 29029:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 7 previous similar messages Lustre: 29029:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 29029:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 29029:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 29029:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 29029:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 29029:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 29029:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 
29029:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 29029:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 29029:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 26151:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7 Lustre: 26151:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 26151:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 26151:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26151:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 26151:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26151:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 26151:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26151:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 26151:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26151:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 26151:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 25799:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x449:0x0] with magic=0xbd60bd0 Lustre: 26151:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 26151:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 27 previous similar messages Lustre: 26151:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 26151:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 27 
previous similar messages Lustre: 26151:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 26151:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 27 previous similar messages Lustre: 26151:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 26151:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 27 previous similar messages Lustre: 26151:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 26151:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 27 previous similar messages Lustre: 26151:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 26151:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 27 previous similar messages Lustre: 26135:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 514 < left 618, rollback = 7 Lustre: 26135:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 13 previous similar messages Lustre: 26135:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 26135:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 13 previous similar messages Lustre: 26135:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 26135:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 13 previous similar messages Lustre: 26135:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 26135:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 13 previous similar messages Lustre: 26135:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 26135:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 13 previous similar messages Lustre: 26135:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 26135:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 13 previous similar messages Lustre: mdt00_005: service thread pid 
1357 was inactive for 40.079 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Pid: 1357, comm: mdt00_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_reint_link+0x7dc/0xd10 [mdt] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [<0>] mdt_reint_internal+0x76c/0xba0 [mdt] [<0>] mdt_reint+0x67/0x150 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: mdt01_000: service thread pid 25796 was inactive for 62.236 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Pid: 25796, comm: mdt01_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_reint_link+0x7dc/0xd10 [mdt] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [<0>] mdt_reint_internal+0x76c/0xba0 [mdt] [<0>] mdt_reint+0x67/0x150 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88024067d2c0/0x2022befa1187d036 lrc: 3/0,0 mode: CR/CR res: [0x200000401:0x5af:0x0].0x0 bits 0xa/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 
nid: 0@lo remote: 0x2022befa1187d028 expref: 283 pid: 25793 timeout: 49002 lvb_type: 0 LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88008550a548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 13628:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88008550a548: [0x200000401:0x1:0x0] lock enqueue fails: rc = -5 LustreError: 13831:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x200000402:0x2be:0x0] mdc close failed: rc = -108 LustreError: 13624:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000401:0x5af:0x0] error -108. LustreError: 13624:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x5af:0x0] error: rc = -108 LustreError: 13831:0:(ldlm_resource.c:1125:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff88008550a548: namespace resource [0x200000007:0x1:0x0].0x0 (ffff8802a7f14540) refcount nonzero (2) after lock cleanup; forcing cleanup. 
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection restored to (at 0@lo) Lustre: 26146:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 546, rollback = 7 Lustre: 26146:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 26146:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 26146:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26146:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 26146:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26146:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/546/0, punch: 0/0/0, quota 4/150/0 Lustre: 26146:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26146:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 26146:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 26146:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 26146:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 27842:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x6fe:0x0] with magic=0xbd60bd0 Lustre: 27842:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message LustreError: 25511:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x200000403:0x361:0x0] mdc close failed: rc = -13 LustreError: 25511:0:(file.c:246:ll_close_inode_openhandle()) Skipped 25 previous similar messages Lustre: 25807:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x4f2:0x0] with magic=0xbd60bd0 Lustre: 25807:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 
1 previous similar message Lustre: 28381:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0xe5a:0x0] with magic=0xbd60bd0 Lustre: 28381:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: 26148:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 514 < left 618, rollback = 7 Lustre: 26148:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 145 previous similar messages Lustre: 26148:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 26148:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 145 previous similar messages Lustre: 26148:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 26148:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 145 previous similar messages Lustre: 26148:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 26148:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 145 previous similar messages Lustre: 26148:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 26148:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 145 previous similar messages Lustre: 26148:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 26148:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 145 previous similar messages Lustre: mdt02_001: service thread pid 25800 was inactive for 40.053 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: mdt02_003: service thread pid 27816 was inactive for 40.050 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Pid: 27816, comm: mdt02_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: Skipped 4 previous similar messages Lustre: mdt03_005: service thread pid 2645 was inactive for 62.004 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Lustre: Skipped 1 previous similar message LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88025d86b100/0x2022befa119afa09 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0xeea:0x0].0x0 bits 0x1b/0x0 rrc: 14 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa119af9f4 expref: 532 pid: 27883 timeout: 49123 lvb_type: 0 LustreError: 28450:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff880320a8a548 ns: mdt-lustre-MDT0000_UUID lock: ffff8802ef2ef840/0x2022befa119b0d65 lrc: 1/0,0 mode: --/PR res: [0x200000402:0xeea:0x0].0x0 bits 0x20/0x0 rrc: 11 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa119b0d50 expref: 333 pid: 28450 timeout: 0 lvb_type: 0 LustreError: 25781:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002155 with bad export cookie 2315623139666355091 Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 11-0: lustre-MDT0000-mdc-ffff8802c79c5d28: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 167-0: lustre-MDT0000-mdc-ffff8802c79c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 10207:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0xeea:0x0] error: rc = -5 LustreError: 10207:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 75 previous similar messages LustreError: 10333:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000402:0xf03:0x0] error -108. 
LustreError: 10326:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 10482:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c79c5d28: inode [0x200000402:0xb7e:0x0] mdc close failed: rc = -108 Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection restored to (at 0@lo) Lustre: 28955:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 28955:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 28955:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28955:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 28955:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28955:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 28955:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 28955:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 28955:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28955:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 28955:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28955:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 2451:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x884:0x0] with magic=0xbd60bd0 Lustre: 2451:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: mdt05_006: service thread pid 635 was inactive for 40.053 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Lustre: Skipped 2 previous similar messages LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88009b5bb880/0x2022befa11c16a81 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x12aa:0x0].0x0 bits 0x1b/0x0 rrc: 13 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa11c16a5e expref: 643 pid: 28260 timeout: 49269 lvb_type: 0 LustreError: 25813:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88025dbf6678 ns: mdt-lustre-MDT0000_UUID lock: ffff8802ddd59a80/0x2022befa11c17c01 lrc: 1/0,0 mode: --/PR res: [0x200000404:0x12aa:0x0].0x0 bits 0x20/0x0 rrc: 10 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa11c17be5 expref: 532 pid: 25813 timeout: 0 lvb_type: 0 LustreError: 25768:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002302 with bad export cookie 2315623139668333291 Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 11-0: lustre-MDT0000-mdc-ffff8802c79c5d28: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 25813:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: 167-0: lustre-MDT0000-mdc-ffff8802c79c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 5079:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x12aa:0x0] error: rc = -5 LustreError: 5079:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 60 previous similar messages LustreError: 4401:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c79c5d28: inode [0x200000404:0x12aa:0x0] mdc close failed: rc = -5 LustreError: 4401:0:(file.c:246:ll_close_inode_openhandle()) Skipped 14 previous similar messages LustreError: 4684:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 4684:0:(llite_lib.c:1933:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 5133:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000404:0x12fa:0x0] error -108. LustreError: 5133:0:(vvp_io.c:1879:vvp_io_init()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection restored to (at 0@lo) Lustre: 28143:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 28143:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 305 previous similar messages Lustre: 28143:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28143:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 305 previous similar messages Lustre: 28143:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28143:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 305 previous similar messages Lustre: 28143:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 28143:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 305 previous similar messages Lustre: 28143:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28143:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 305 previous similar messages Lustre: 28143:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 
Lustre: 28143:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 305 previous similar messages LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88029cd2d680/0x2022befa11ca6146 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x397:0x0].0x0 bits 0x1b/0x0 rrc: 16 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa11ca6115 expref: 314 pid: 28260 timeout: 49381 lvb_type: 0 LustreError: 2766:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff880298dd0958 ns: mdt-lustre-MDT0000_UUID lock: ffff8802a2ca3880/0x2022befa11ca6f62 lrc: 1/0,0 mode: --/PR res: [0x200000405:0x397:0x0].0x0 bits 0x20/0x0 rrc: 11 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa11ca6f4d expref: 234 pid: 2766 timeout: 0 lvb_type: 0 LustreError: 27911:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002413 with bad export cookie 2315623139670874179 Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff8802c79c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 18480:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x397:0x0] error: rc = -5 LustreError: 18480:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 15 previous similar messages LustreError: 18640:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 18748:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c79c5d28: inode [0x200000405:0x33b:0x0] mdc close failed: rc = -108 LustreError: 18748:0:(file.c:246:ll_close_inode_openhandle()) Skipped 21 previous similar messages LustreError: 2766:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection restored to (at 0@lo) LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802a8198b80/0x2022befa11ce40fa lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x1ad:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa11ce40d0 expref: 1329 pid: 1357 timeout: 49485 lvb_type: 0 LustreError: 25802:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88029d7e37e8 ns: mdt-lustre-MDT0000_UUID lock: ffff8802a1d10f40/0x2022befa11ce4f47 lrc: 1/0,0 mode: --/PR res: [0x200000406:0x1ad:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa11ce4f2b expref: 1210 pid: 25802 timeout: 0 lvb_type: 0 LustreError: 25767:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002517 with bad export cookie 2315623139667072556 Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88008550a548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 24139:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x200000405:0x33b:0x0] mdc close failed: rc = -108 LustreError: 24139:0:(file.c:246:ll_close_inode_openhandle()) Skipped 30 previous similar messages LustreError: 25802:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: 24200:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 24200:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 47 previous similar messages Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection restored to (at 0@lo) Lustre: 2489:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0x23f:0x0] with magic=0xbd60bd0 Lustre: 2489:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 3 previous similar messages LustreError: 17039:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x200000407:0x716:0x0] mdc close failed: rc = -13 LustreError: 17039:0:(file.c:246:ll_close_inode_openhandle()) Skipped 65 previous similar messages Lustre: format at service.c:2372:ptlrpc_server_handle_request doesn't end in newline Lustre: 27914:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000407:0xc4d:0x0] with magic=0xbd60bd0 Lustre: 27914:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message ptlrpc_watchdog_fire: 10 callbacks suppressed Lustre: mdt05_000: service thread pid 25808 was inactive for 40.019 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: mdt05_005: service thread pid 2489 was inactive for 40.019 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Pid: 2489, comm: mdt05_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: mdt00_005: service thread pid 1357 was inactive for 62.202 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Pid: 1357, comm: mdt00_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_check_lock+0xec/0x3c0 [mdt] [<0>] mdt_reint_rename+0x1fc4/0x24f0 [mdt] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [<0>] mdt_reint_internal+0x76c/0xba0 [mdt] [<0>] mdt_reint+0x67/0x150 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Pid: 27842, comm: mdt00_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Lustre: mdt00_001: service thread pid 25794 was inactive for 62.031 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8800932ed680/0x2022befa11f4e3d7 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x1146:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa11f4e3bb expref: 688 pid: 25804 timeout: 49625 lvb_type: 0
LustreError: 25794:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802c78db7e8 ns: mdt-lustre-MDT0000_UUID lock: ffff8800abe4b4c0/0x2022befa11f4e781 lrc: 1/0,0 mode: --/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 16 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa11f4e773 expref: 624 pid: 25794 timeout: 0 lvb_type: 0
LustreError: 11-0: lustre-MDT0000-mdc-ffff8802c79c5d28: operation mds_reint to node 0@lo failed: rc = -107
LustreError: 25774:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002657 with bad export cookie 2315623139671439814
LustreError: Skipped 9 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff8802c79c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 14856:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5
LustreError: 14856:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 61 previous similar messages
LustreError: 14723:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 14723:0:(llite_lib.c:1933:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 14905:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c79c5d28: inode [0x200000407:0xf19:0x0] mdc close failed: rc = -108
Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection restored to (at 0@lo)
Lustre: 26145:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 26145:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 323 previous similar messages
Lustre: 26145:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 26145:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 323 previous similar messages
Lustre: 26145:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 26145:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 323 previous similar messages
Lustre: 26145:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 26145:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 323 previous similar messages
Lustre: 26145:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 26145:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 323 previous similar messages
Lustre: 26145:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 26145:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 323 previous similar messages
Lustre: 25795:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000407:0x1308:0x0] with magic=0xbd60bd0
Lustre: 25795:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 5 previous similar messages
12[23966]: segfault at 0 ip (null) sp 00007ffd5a6e0a78 error 14 in 12[400000+6000]
14[1916]: segfault at 8 ip 00007f5b4b9d07e8 sp 00007ffcc1648ce0 error 4 in ld-2.17.so[7f5b4b9c5000+22000]
7[12877]: segfault at 8 ip 00007f684ebcd7e8 sp 00007fffbe623080 error 4 in ld-2.17.so[7f684ebc2000+22000]
Lustre: mdt02_001: service thread pid 25800 was inactive for 62.149 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802c2ba8b80/0x2022befa120ebdb6 lrc: 3/0,0 mode: CR/CR res: [0x200000408:0xca5:0x0].0x0 bits 0xa/0x0 rrc: 15 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa120ebd9a expref: 836 pid: 25796 timeout: 49761 lvb_type: 0
LustreError: 25802:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802a5f06fc8 ns: mdt-lustre-MDT0000_UUID lock: ffff880227aefc00/0x2022befa120ed3ea lrc: 1/0,0 mode: --/PR res: [0x200000408:0xca5:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa120ed36c expref: 395 pid: 25802 timeout: 0 lvb_type: 0
LustreError: 25767:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002794 with bad export cookie 2315623139671698569
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88008550a548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 25802:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: 20684:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x200000407:0x1c5c:0x0] mdc close failed: rc = -108
LustreError: 20684:0:(file.c:246:ll_close_inode_openhandle()) Skipped 19 previous similar messages
LustreError: 20126:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000408:0xca5:0x0] error -108.
LustreError: 20126:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0xca5:0x0] error: rc = -108
LustreError: 20126:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 37 previous similar messages
LustreError: 20421:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88008550a548: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 20421:0:(mdc_request.c:1465:mdc_read_page()) Skipped 17 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection restored to (at 0@lo)
9[5998]: segfault at 8 ip 00007ff9b1cd67e8 sp 00007fff0472c390 error 4 in ld-2.17.so[7ff9b1ccb000+22000]
Lustre: 25800:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000409:0x9ff:0x0] with magic=0xbd60bd0
Lustre: 25800:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message
Lustre: format at client.c:2228:ptlrpc_check_set doesn't end in newline
4[10807]: segfault at 8 ip 00007f46736287e8 sp 00007fffb3fd65b0 error 4 in ld-2.17.so[7f467361d000+22000]
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88027c6ba200/0x2022befa123fdce9 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x223c:0x0].0x0 bits 0x1b/0x0 rrc: 10 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa123fdcbf expref: 737 pid: 29032 timeout: 49947 lvb_type: 0
LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: 25771:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693002979 with bad export cookie 2315623139675933352
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88008550a548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 23967:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x223c:0x0] error: rc = -107
LustreError: 23967:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 13 previous similar messages
LustreError: 24221:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108
LustreError: 24221:0:(file.c:246:ll_close_inode_openhandle()) Skipped 19 previous similar messages
LustreError: 24221:0:(ldlm_resource.c:1125:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff88008550a548: namespace resource [0x200000007:0x1:0x0].0x0 (ffff8802b75282c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection restored to (at 0@lo)
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880241148f40/0x2022befa1240e7de lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x22b4:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa1240e7ad expref: 118 pid: 2766 timeout: 50048 lvb_type: 0
LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88008550a548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 25549:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 25620:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x22b4:0x0] error: rc = -108
LustreError: 25620:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 60 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection restored to (at 0@lo)
Lustre: 2640:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040b:0x1b2:0x0] with magic=0xbd60bd0
Lustre: 2640:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 7 previous similar messages
4[21734]: segfault at 0 ip (null) sp 00007ffe4b55ac48 error 14 in 4[400000+6000]
ptlrpc_watchdog_fire: 2 callbacks suppressed
Lustre: mdt05_002: service thread pid 25810 was inactive for 62.221 seconds. The thread might be hung, or it might only be slow and will resume later.
Dumping the stack trace for debugging purposes:
Lustre: Skipped 1 previous similar message
Pid: 25810, comm: mdt05_002 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802e54507c0/0x2022befa126b23ea lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x367b:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa126b2388 expref: 787 pid: 25799 timeout: 50192 lvb_type: 0
LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff88008550a548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 26957:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000408:0x367b:0x0] error -108.
LustreError: 27105:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 27105:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 6 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff88008550a548: Connection restored to (at 0@lo)
Lustre: 26135:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7
Lustre: 26135:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 759 previous similar messages
Lustre: 26135:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 26135:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 759 previous similar messages
Lustre: 26135:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 26135:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 759 previous similar messages
Lustre: 26135:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 26135:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 759 previous similar messages
Lustre: 26135:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 26135:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 759 previous similar messages
Lustre: 26135:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 26135:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 759 previous similar messages
10[8255]: segfault at 8 ip 00007efdce0dc7e8 sp 00007ffcb3370110 error 4 in ld-2.17.so[7efdce0d1000+22000]
LustreError: 18637:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c79c5d28: inode [0x20000040c:0x761:0x0] mdc close failed: rc = -13
LustreError: 18637:0:(file.c:246:ll_close_inode_openhandle()) Skipped 109 previous similar messages
LustreError: 27852:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802cb842e98 ns: mdt-lustre-MDT0000_UUID lock: ffff8802ed4ab880/0x2022befa1280370e lrc: 1/0,0 mode: --/PR res: [0x200000408:0x404b:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa12803700 expref: 449 pid: 27852 timeout: 0 lvb_type: 0
LustreError: 25761:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693003343 with bad export cookie 2315623139681971867
LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: 26686:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88008550a548: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 26686:0:(mdc_request.c:1465:mdc_read_page()) Skipped 21 previous similar messages
7[11129]: segfault at 8 ip 00007f4e3d42d7e8 sp 00007ffd510efea0 error 4 in ld-2.17.so[7f4e3d422000+22000]
Lustre: 27989:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040d:0x749:0x0] with magic=0xbd60bd0
Lustre: 27989:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 9 previous similar messages
LustreError: 1357:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88031eb14138 ns: mdt-lustre-MDT0000_UUID lock: ffff8802d4b4ed00/0x2022befa12a2352d lrc: 1/0,0 mode: --/PR res: [0x200000408:0x54de:0x0].0x0 bits 0x13/0x0 rrc: 10 type: IBT gid 0 flags: 0x54a01400000020 nid: 0@lo remote: 0x2022befa12a2350a expref: 983 pid: 1357 timeout: 0 lvb_type: 0
LustreError: 25773:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693003456 with bad export cookie 2315623139683350720
LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 1 previous similar message
LustreError: 8955:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108
19[9691]: segfault at 8 ip 00007faf9e6ef7e8 sp 00007ffc800c8890 error 4 in ld-2.17.so[7faf9e6e4000+22000]
19[9679]: segfault at 8 ip 00007ff035e667e8 sp 00007ffd3e556930 error 4 in ld-2.17.so[7ff035e5b000+22000]
8[5716]: segfault at 8 ip 00007fe18755e7e8 sp 00007fff6893b000 error 4 in ld-2.17.so[7fe187553000+22000]
2[7474]: segfault at 8 ip 00007f520bb0a7e8 sp 00007fffa344b1f0 error 4 in ld-2.17.so[7f520baff000+22000]
4[20982]: segfault at 8 ip 00007f34c0d247e8 sp 00007fff20e2f150 error 4 in ld-2.17.so[7f34c0d19000+22000]
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802d633a980/0x2022befa12f05ec9 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x78c8:0x0].0x0 bits 0x13/0x0 rrc: 16 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x2022befa12f05ead expref: 3713 pid: 2836 timeout: 50655 lvb_type: 0
LustreError: 25786:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 2 previous similar messages
LustreError: 2489:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802ce132e98 ns: mdt-lustre-MDT0000_UUID lock: ffff88024c3acb40/0x2022befa12f0821d lrc: 1/0,0 mode: --/PR res: [0x200000408:0x78c8:0x0].0x0 bits 0x20/0x0 rrc: 15 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa12f0820f expref: 3564 pid: 2489 timeout: 0 lvb_type: 0
LustreError: 25766:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1693003687 with bad export cookie 2315623139674220963
Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 2 previous similar messages
LustreError: 11-0: lustre-MDT0000-mdc-ffff8802c79c5d28: operation mds_reint to node 0@lo failed: rc = -107
LustreError: 167-0: lustre-MDT0000-mdc-ffff8802c79c5d28: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: Skipped 2 previous similar messages
LustreError: 5236:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x78c8:0x0] error: rc = -5
LustreError: 5236:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 237 previous similar messages
LustreError: 5375:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 5087:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000408:0x78b9:0x0] error -108.
Lustre: lustre-MDT0000-mdc-ffff8802c79c5d28: Connection restored to (at 0@lo)
Lustre: Skipped 2 previous similar messages
8[10622]: segfault at 8 ip 00007fb688b777e8 sp 00007ffcacda7440 error 4 in ld-2.17.so[7fb688b6c000+22000]
14[4940]: segfault at 0 ip (null) sp 00007ffc9769fe58 error 14 in 14[400000+6000]
LustreError: 27989:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802aad792a8 ns: mdt-lustre-MDT0000_UUID lock: ffff8802d1424b40/0x2022befa130ed93e lrc: 1/0,0 mode: --/PR res: [0x20000040f:0xd28:0x0].0x0 bits 0x1b/0x0 rrc: 2 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0x2022befa130ed914 expref: 1692 pid: 27989 timeout: 0 lvb_type: 0
LustreError: 15599:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88008550a548: inode [0x20000040f:0xd28:0x0] mdc close failed: rc = -5
LustreError: 15599:0:(file.c:246:ll_close_inode_openhandle()) Skipped 87 previous similar messages
LustreError: 27989:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 14 previous similar messages
LustreError: 15745:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88008550a548: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 15745:0:(mdc_request.c:1465:mdc_read_page()) Skipped 29 previous similar messages
Lustre: 30955:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7
Lustre: 30955:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1435 previous similar messages
Lustre: 30955:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 30955:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1435 previous similar messages
Lustre: 30955:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 30955:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1435 previous similar messages
Lustre: 30955:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 30955:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1435 previous similar messages
Lustre: 30955:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 30955:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1435 previous similar messages
Lustre: 30955:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 30955:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1435 previous similar messages
14[16298]: segfault at 8 ip 00007f245c3f27e8 sp 00007ffee3235bc0 error 4 in ld-2.17.so[7f245c3e7000+22000]
2[22661]: segfault at 8 ip 00007f170217c7e8 sp 00007ffd1b29a7e0 error 4 in ld-2.17.so[7f1702171000+22000]
13[1667]: segfault at 0 ip (null) sp 00007ffd0b3975e8 error 14 in 13[400000+6000]
Lustre: 27842:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040f:0x139d:0x0] with magic=0xbd60bd0
Lustre: 27842:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 39 previous similar messages
LustreError: 11-0: lustre-MDT0000-mdc-ffff88008550a548: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 8 previous similar messages
LustreError: 844:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 844:0:(llite_lib.c:1933:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 630:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000410:0xf2a:0x0] error -108.
Lustre: format at ldlm_lock.c:676:ldlm_add_bl_work_item doesn't end in newline
4[25962]: segfault at 8 ip 00007f88c20497e8 sp 00007ffe452bcc50 error 4 in ld-2.17.so[7f88c203e000+22000]
0[10545]: segfault at 8 ip 00007fd20d7447e8 sp 00007ffddcb34170 error 4 in ld-2.17.so[7fd20d739000+22000]
13[10668]: segfault at 8 ip 00007f1c9e9b07e8 sp 00007ffdbbf1b410 error 4 in ld-2.17.so[7f1c9e9a5000+22000]
12[27152]: segfault at 8 ip 00007f5870cb67e8 sp 00007ffcda46e780 error 4 in ld-2.17.so[7f5870cab000+22000]
4[1855]: segfault at 0 ip (null) sp 00007ffc8a49b538 error 14 in 4[400000+6000]
9[6866]: segfault at 0 ip (null) sp 00007ffd2e156c48 error 14 in 9[400000+6000]
4[11954]: segfault at 8 ip 00007f52ab2e37e8 sp 00007fffde2a6ed0 error 4 in ld-2.17.so[7f52ab2d8000+22000]
7[27004]: segfault at 8 ip 00007ff247a847e8 sp 00007ffddd055d10 error 4 in ld-2.17.so[7ff247a79000+22000] | Link to test |
racer test 1: racer on clients: centos-65.localnet DURATION=2700 |
LustreError: 4168:0:(lod_object.c:5136:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed:
LustreError: 4168:0:(lod_object.c:5136:lod_xattr_set()) LBUG
Pid: 4168, comm: mdt_rdpg01_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace: [<0>] libcfs_call_trace+0x90/0xf0 [libcfs] [<0>] lbug_with_loc+0x4c/0xa0 [libcfs] [<0>] lod_xattr_set+0x1b13/0x1c90 [lod] [<0>] mdo_xattr_set+0xc0/0x4c0 [mdd] [<0>] mdd_xattr_set+0xf85/0x1200 [mdd] [<0>] mo_xattr_set+0x43/0x45 [mdt] [<0>] mdt_close_handle_layouts+0x9a4/0xee0 [mdt] [<0>] mdt_mfd_close+0x5b2/0xbb0 [mdt] [<0>] mdt_close_internal+0xb4/0x240 [mdt] [<0>] mdt_close+0x28c/0x970 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe |
Lustre: 30153:0:(mdt_recovery.c:149:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802993ba0c0 x1775230290887488/t4294967613(0) o101->a34c44be-d628-4b49-9bcf-f52c0341c80c@0@lo:711/0 lens 376/864 e 0 to 0 dl 1692991571 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
Lustre: 28237:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7
Lustre: 28237:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 28237:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 28237:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 28237:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 28237:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
LustreError: 888:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000401:0xfa:0x0] mdc close failed: rc = -13
Lustre: 28243:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7
Lustre: 28243:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1 previous similar message
Lustre: 28243:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 28243:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28243:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 28243:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28243:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0
Lustre: 28243:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28243:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 28243:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28243:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 28243:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28245:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7
Lustre: 28245:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 7 previous similar messages
Lustre: 28245:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 28245:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 28245:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 28245:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 28245:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 28245:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 28245:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 28245:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 28245:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 28245:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 28778:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x215:0x0] with magic=0xbd60bd0
Lustre: 27910:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[65538]=1 of non-SEL file [0x200000402:0x215:0x0] with magic=0xbd60bd0
Lustre: 27910:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 27907:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x2fd:0x0] with magic=0xbd60bd0
Lustre: 27907:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 28239:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7
Lustre: 28239:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1 previous similar message
Lustre: 28239:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 28239:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28239:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 28239:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28239:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 28239:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28239:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 28239:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 28239:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 28239:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message
19[7418]: segfault at 8 ip 00007f39688f87e8 sp 00007fffd9e7bde0 error 4 in ld-2.17.so[7f39688ed000+22000]
Lustre: 30796:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7
Lustre: 30796:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 3 previous similar messages
Lustre: 30796:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 30796:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3 previous similar messages
Lustre: 30796:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 30796:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3 previous similar messages
Lustre: 30796:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 30796:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3 previous similar messages
Lustre: 30796:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 30796:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3 previous similar messages
Lustre: 30796:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 30796:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3 previous similar messages
Lustre: 29989:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x519:0x0] with magic=0xbd60bd0
Lustre: 29989:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message
3[14218]: segfault at 8 ip 00007f27b128f7e8 sp 00007ffd56f94ca0 error 4 in ld-2.17.so[7f27b1284000+22000]
Lustre: 31104:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 582, rollback = 7
Lustre: 31104:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 39 previous similar messages
Lustre: 31104:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 31104:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 39 previous similar messages
Lustre: 31104:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 31104:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 39 previous similar messages
Lustre: 31104:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/582/0, punch: 0/0/0, quota 1/3/0
Lustre: 31104:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 39 previous similar messages
Lustre: 31104:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 31104:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 39 previous similar messages
Lustre: 31104:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 31104:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 39 previous similar messages
Lustre: 28228:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 514 < left 618, rollback = 7
Lustre: 28228:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 63 previous similar messages
Lustre: 28228:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 28228:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 28228:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0
Lustre: 28228:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 28228:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 28228:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 28228:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 28228:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 63 previous similar messages Lustre: 28228:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28228:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 63 previous similar messages Lustre: mdt06_005: service thread pid 31192 was inactive for 40.059 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Pid: 30372, comm: mdt00_004 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_object_find_lock+0x54/0x170 [mdt] [<0>] mdt_reint_setxattr+0x231/0x1070 [mdt] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [<0>] mdt_reint_internal+0x76c/0xba0 [mdt] [<0>] mdt_reint+0x67/0x150 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Pid: 2841, comm: mdt00_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: Lustre: mdt00_002: service thread pid 27886 was inactive for 40.055 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
[<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: Skipped 2 previous similar messages Pid: 31192, comm: mdt06_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: mdt01_004: service thread pid 30089 was inactive for 62.074 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Lustre: Skipped 3 previous similar messages LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88008e3f1300/0xd34a8673a9420114 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x95e:0x0].0x0 bits 0x1b/0x0 rrc: 17 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673a94200e3 expref: 451 pid: 27886 timeout: 14608 lvb_type: 0 LustreError: 824:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff880325ef53d8 ns: mdt-lustre-MDT0000_UUID lock: ffff880259c06d00/0xd34a8673a9421025 lrc: 1/0,0 mode: --/PR res: [0x200000401:0x95e:0x0].0x0 bits 0x20/0x0 rrc: 15 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xd34a8673a9421017 expref: 365 pid: 824 timeout: 0 lvb_type: 0 LustreError: 11-0: lustre-MDT0000-mdc-ffff8802b0d892a8: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff8802b0d892a8: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 27629:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x95e:0x0] error: rc = -5 LustreError: 27765:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 27677:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802b0d892a8: inode [0x200000401:0x90d:0x0] mdc close failed: rc = -108 LustreError: 27677:0:(file.c:246:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 27677:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff8802b0d892a8: [0x200000401:0x8e6:0x0] lock enqueue fails: rc = -108 LustreError: 27521:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000401:0x95e:0x0] error -108. 
Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection restored to (at 0@lo) Lustre: 28243:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 28243:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 28243:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28243:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 28243:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28243:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 28243:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 28243:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 28243:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28243:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 28243:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28243:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: format at ldlm_lockd.c:1538:ldlm_handle_enqueue doesn't end in newline Lustre: 27893:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0xe1a:0x0] with magic=0xbd60bd0 Lustre: 27893:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: mdt04_004: service thread pid 30727 was inactive for 40.091 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 4 previous similar messages Lustre: mdt05_000: service thread pid 27899 was inactive for 62.108 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8800a127e940/0xd34a8673a957ea98 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x994:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673a957ea6e expref: 831 pid: 27895 timeout: 14737 lvb_type: 0 LustreError: 11-0: lustre-MDT0000-mdc-ffff88029918ca88: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 27901:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff88029a1b0958 ns: mdt-lustre-MDT0000_UUID lock: ffff8802eabe52c0/0xd34a8673a957ef53 lrc: 1/0,0 mode: --/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 24 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xd34a8673a957ef45 expref: 545 pid: 27901 timeout: 0 lvb_type: 0 LustreError: 27901:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 5 previous similar messages LustreError: 167-0: lustre-MDT0000-mdc-ffff88029918ca88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 27321:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5 LustreError: 27321:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 41 previous similar messages LustreError: 27385:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000401:0x129f:0x0] mdc close failed: rc = -108 LustreError: 27385:0:(file.c:246:ll_close_inode_openhandle()) Skipped 33 previous similar messages LustreError: 27161:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000403:0x994:0x0] error -108. 
Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection restored to (at 0@lo) Lustre: 28234:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 570, rollback = 7 Lustre: 28234:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 153 previous similar messages Lustre: 28234:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28234:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 28234:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28234:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 28234:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/570/0, punch: 0/0/0, quota 1/3/0 Lustre: 28234:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 28234:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28234:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 28234:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28234:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 153 previous similar messages 7[29494]: segfault at 8 ip 00007fddcf2767e8 sp 00007ffc3bc03e30 error 4 in ld-2.17.so[7fddcf26b000+22000] LustreError: 373:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000404:0x135:0x0] mdc close failed: rc = -13 LustreError: 373:0:(file.c:246:ll_close_inode_openhandle()) Skipped 61 previous similar messages Lustre: 27897:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x2c7:0x0] with magic=0xbd60bd0 Lustre: 27897:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message LustreError: 9237:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802b0d892a8: inode [0x200000403:0xdf9:0x0] mdc 
close failed: rc = -13 LustreError: 9237:0:(file.c:246:ll_close_inode_openhandle()) Skipped 1 previous similar message Lustre: 30173:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x13e3:0x0] with magic=0xbd60bd0 Lustre: 30173:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 3 previous similar messages 12[3573]: segfault at 8 ip 00007fc2f37b87e8 sp 00007fff405ce320 error 4 in ld-2.17.so[7fc2f37ad000+22000] LustreError: 11345:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000403:0x229b:0x0] mdc close failed: rc = -13 LustreError: 11345:0:(file.c:246:ll_close_inode_openhandle()) Skipped 1 previous similar message Lustre: 27889:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x237b:0x0] with magic=0xbd60bd0 Lustre: 27889:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 5 previous similar messages LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802dee0da40/0xd34a8673a9970b37 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x1b74:0x0].0x0 bits 0x1b/0x0 rrc: 14 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673a9970b1b expref: 812 pid: 31192 timeout: 14923 lvb_type: 0 LustreError: 27886:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802c5b95d28 ns: mdt-lustre-MDT0000_UUID lock: ffff8802415ef480/0xd34a8673a99743b5 lrc: 1/0,0 mode: --/PR res: [0x200000404:0x1b74:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xd34a8673a99743a7 expref: 726 pid: 27886 timeout: 0 lvb_type: 0 LustreError: 27886:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: 11-0: lustre-MDT0000-mdc-ffff88029918ca88: operation ldlm_enqueue to node 0@lo failed: 
rc = -107 LustreError: Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029918ca88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 21314:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x1b74:0x0] error: rc = -5 LustreError: 21336:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 21336:0:(llite_lib.c:1933:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 21314:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 32 previous similar messages LustreError: 21425:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108 LustreError: 21425:0:(file.c:246:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 20801:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000404:0x1b74:0x0] error -108. 
Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection restored to (at 0@lo) Lustre: 28230:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7 Lustre: 28230:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 415 previous similar messages Lustre: 28230:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28230:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 415 previous similar messages Lustre: 28230:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28230:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 415 previous similar messages Lustre: 28230:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 28230:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 415 previous similar messages Lustre: 28230:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28230:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 415 previous similar messages Lustre: 28230:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28230:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 415 previous similar messages Lustre: 27905:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x2a8:0x0] with magic=0xbd60bd0 Lustre: 27905:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 1 previous similar message Lustre: format at mdc_locks.c:745:mdc_finish_enqueue doesn't end in newline Lustre: format at client.c:2228:ptlrpc_check_set doesn't end in newline LustreError: 17089:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802b0d892a8: inode [0x200000403:0x385f:0x0] mdc close failed: rc = -13 LustreError: 17089:0:(file.c:246:ll_close_inode_openhandle()) Skipped 13 previous similar messages 1[3048]: segfault at 8 ip 00007f8ecd8de7e8 sp 00007ffd4d67dcf0 error 4 in 
ld-2.17.so[7f8ecd8d3000+22000] 11[10857]: segfault at 8 ip 00007f311e2e37e8 sp 00007ffcf3789ce0 error 4 in ld-2.17.so[7f311e2d8000+22000] 2[12073]: segfault at 8 ip 00007fb49953f7e8 sp 00007fff170d2820 error 4 in ld-2.17.so[7fb499534000+22000] Lustre: format at service.c:2372:ptlrpc_server_handle_request doesn't end in newline 19[27176]: segfault at 8 ip 00007efcc31b37e8 sp 00007fff41626810 error 4 in ld-2.17.so[7efcc31a8000+22000] 2[27023]: segfault at 8 ip 00007f837dfde7e8 sp 00007ffe19820750 error 4 in ld-2.17.so[7f837dfd3000+22000] 10[1425]: segfault at 8 ip 00007fe97d5347e8 sp 00007ffed10950a0 error 4 in ld-2.17.so[7fe97d529000+22000] LustreError: 14423:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000405:0x24fa:0x0] mdc close failed: rc = -13 LustreError: 14423:0:(file.c:246:ll_close_inode_openhandle()) Skipped 5 previous similar messages Lustre: 8071:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x2821:0x0] with magic=0xbd60bd0 Lustre: 8071:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 17 previous similar messages 18[32714]: segfault at 8 ip 00007f965050a7e8 sp 00007ffc0c92f010 error 4 in ld-2.17.so[7f96504ff000+22000] 7[2244]: segfault at 8 ip 00007f2a09dc37e8 sp 00007fffdb403860 error 4 in ld-2.17.so[7f2a09db8000+22000] ptlrpc_watchdog_fire: 13 callbacks suppressed Lustre: mdt02_000: service thread pid 27890 was inactive for 40.022 seconds. The thread might be hung, or it might only be slow and will resume later.
Dumping the stack trace for debugging purposes: Pid: 27890, comm: mdt02_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802a57c43c0/0xd34a8673aa43042f lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x46f7:0x0].0x0 bits 0x1b/0x0 rrc: 8 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673aa4303e9 expref: 2324 pid: 8071 timeout: 15248 lvb_type: 0 LustreError: 27854:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1692992340 with bad export cookie 15225129321604008118 LustreError: 11-0: lustre-MDT0000-mdc-ffff88029918ca88: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029918ca88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 26538:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x46f7:0x0] error: rc = -5 LustreError: 26538:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 21 previous similar messages LustreError: 26440:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff88029918ca88: inode [0x200000405:0x46e2:0x0] mdc close failed: rc = -108 LustreError: 26440:0:(file.c:246:ll_close_inode_openhandle()) Skipped 3 previous similar messages LustreError: 26616:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029918ca88: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection restored to (at 0@lo) Lustre: 28247:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 28247:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 1111 previous similar messages Lustre: 28247:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 28247:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1111 previous similar messages Lustre: 28247:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 28247:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1111 previous similar messages Lustre: 28247:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 28247:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1111 previous similar messages Lustre: 28247:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 28247:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1111 previous similar messages Lustre: 28247:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 28247:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1111 previous similar messages 17[24091]: segfault at 4045bc ip 00000000004045bc sp 00007fff84cbb8e8 error 7 in 17[400000+6000] 2[27018]: segfault at 8 
ip 00007eff30ca17e8 sp 00007ffdc2d69c90 error 4 in ld-2.17.so[7eff30c96000+22000] Lustre: format at client.c:1746:ptlrpc_send_new_req doesn't end in newline Lustre: mdt05_003: service thread pid 30173 was inactive for 62.147 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Pid: 30089, comm: mdt01_004 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_object_find_lock+0x54/0x170 [mdt] [<0>] mdt_reint_setxattr+0x231/0x1070 [mdt] [<0>] mdt_reint_rec+0x87/0x240 [mdt] [<0>] mdt_reint_internal+0x76c/0xba0 [mdt] [<0>] mdt_reint+0x67/0x150 [mdt] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Lustre: Skipped 1 previous similar message Pid: 30173, comm: mdt05_003 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock 
callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8800957ec780/0xd34a8673aa67c0b6 lrc: 3/0,0 mode: CR/CR res: [0x200000406:0xf50:0x0].0x0 bits 0xa/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673aa67c0a8 expref: 693 pid: 17627 timeout: 15402 lvb_type: 0 LustreError: 31460:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802a736b7e8 ns: mdt-lustre-MDT0000_UUID lock: ffff88008e37ed00/0xd34a8673aa67dabe lrc: 1/0,0 mode: --/PR res: [0x200000406:0xf50:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xd34a8673aa67daa2 expref: 653 pid: 31460 timeout: 0 lvb_type: 0 LustreError: 31460:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: 11-0: lustre-MDT0000-mdc-ffff88029918ca88: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: 27861:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1692992494 with bad export cookie 15225129321615274303 Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029918ca88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: Skipped 1 previous similar message LustreError: 11130:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000406:0xf50:0x0] error -108. 
LustreError: 11130:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000406:0xf50:0x0] error: rc = -108 LustreError: 11130:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 7 previous similar messages LustreError: 11354:0:(mdc_request.c:1465:mdc_read_page()) lustre-MDT0000-mdc-ffff88029918ca88: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 11354:0:(mdc_request.c:1465:mdc_read_page()) Skipped 29 previous similar messages Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection restored to (at 0@lo) 17[20195]: segfault at 8 ip 00007f4aec73e7e8 sp 00007ffdd4d3ce50 error 4 in ld-2.17.so[7f4aec733000+22000] Lustre: mdt00_008: service thread pid 4833 was inactive for 40.041 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 3 previous similar messages ptlrpc_watchdog_fire: 2 callbacks suppressed Lustre: mdt03_004: service thread pid 30153 was inactive for 62.041 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Pid: 30153, comm: mdt03_004 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock_try+0xa0/0x250 [mdt] [<0>] mdt_getattr_name_lock+0x1889/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802dcdcf840/0xd34a8673aa7d9866 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x864c:0x0].0x0 bits 0x1b/0x0 rrc: 13 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673aa7d9843 expref: 4424 pid: 830 timeout: 15529 lvb_type: 0 LustreError: 4833:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802d8440958 ns: mdt-lustre-MDT0000_UUID lock: ffff8802b5c5c000/0xd34a8673aa7da55c lrc: 1/0,0 mode: --/PR res: [0x200000403:0x864c:0x0].0x0 bits 0x20/0x0 rrc: 11 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xd34a8673aa7da54e expref: 4390 pid: 4833 timeout: 0 lvb_type: 0 LustreError: 11-0: lustre-MDT0000-mdc-ffff8802b0d892a8: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 5 previous similar messages LustreError: 167-0: 
lustre-MDT0000-mdc-ffff8802b0d892a8: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 7784:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x864c:0x0] error: rc = -5 LustreError: 7799:0:(llite_lib.c:1933:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 7799:0:(llite_lib.c:1933:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 7896:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000403:0x8629:0x0] error -108. LustreError: 7784:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 88 previous similar messages LustreError: 8014:0:(file.c:246:ll_close_inode_openhandle()) lustre-clilmv-ffff8802b0d892a8: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108 LustreError: 8014:0:(file.c:246:ll_close_inode_openhandle()) Skipped 54 previous similar messages Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection restored to (at 0@lo) 18[8426]: segfault at 8 ip 00007f711c7397e8 sp 00007ffdb3434ed0 error 4 in ld-2.17.so[7f711c72e000+22000] Lustre: format at client.c:1746:ptlrpc_send_new_req doesn't end in newline Lustre: 27897:0:(lod_lov.c:1327:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000408:0x856:0x0] with magic=0xbd60bd0 Lustre: 27897:0:(lod_lov.c:1327:lod_parse_striping()) Skipped 9 previous similar messages Lustre: mdt03_001: service thread pid 27894 was inactive for 62.082 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: Pid: 27894, comm: mdt03_001 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe Pid: 27893, comm: mdt03_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022 Call Trace: [<0>] ldlm_completion_ast+0x923/0xc80 [ptlrpc] [<0>] ldlm_cli_enqueue_local+0x259/0x880 [ptlrpc] [<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt] [<0>] mdt_object_lock+0x88/0x1c0 [mdt] [<0>] mdt_getattr_name_lock+0xbd3/0x2b20 [mdt] [<0>] mdt_intent_getattr+0x2c5/0x4b0 [mdt] [<0>] mdt_intent_opc+0x1dc/0xc40 [mdt] [<0>] mdt_intent_policy+0xfa/0x460 [mdt] [<0>] ldlm_lock_enqueue+0x3b1/0xbb0 [ptlrpc] [<0>] ldlm_handle_enqueue+0x375/0x17d0 [ptlrpc] [<0>] tgt_enqueue+0x68/0x240 [ptlrpc] [<0>] tgt_request_handle+0x88e/0x19b0 [ptlrpc] [<0>] ptlrpc_server_handle_request+0x251/0xc00 [ptlrpc] [<0>] ptlrpc_main+0xc66/0x1670 [ptlrpc] [<0>] kthread+0xe4/0xf0 [<0>] ret_from_fork_nospec_begin+0x7/0x21 [<0>] 0xfffffffffffffffe LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88023b94c000/0xd34a8673aa9466b3 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x1317:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo 
remote: 0xd34a8673aa946697 expref: 748 pid: 27899 timeout: 15667 lvb_type: 0 LustreError: 27887:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8803276d6fc8 ns: mdt-lustre-MDT0000_UUID lock: ffff8802561c8b80/0xd34a8673aa946e69 lrc: 1/0,0 mode: --/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 22 type: IBT gid 0 flags: 0x54a01000000000 nid: 0@lo remote: 0xd34a8673aa946e46 expref: 698 pid: 27887 timeout: 0 lvb_type: 0 LustreError: 19237:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1692992758 with bad export cookie 15225129321617679209 Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff88029918ca88: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 7373:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5 LustreError: 7373:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 10 previous similar messages LustreError: 27887:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages Lustre: lustre-MDT0000-mdc-ffff88029918ca88: Connection restored to (at 0@lo) LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802a6c4b880/0xd34a8673aaa14ce5 lrc: 3/0,0 mode: CR/CR res: [0x200000408:0xf18:0x0].0x0 bits 0xa/0x0 rrc: 11 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673aaa14c6e expref: 642 pid: 31460 timeout: 15796 lvb_type: 0 LustreError: 27850:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1692992888 with bad export cookie 15225129321619112438 LustreError: 11-0: lustre-MDT0000-mdc-ffff8802b0d892a8: operation mds_close to node 0@lo failed: rc = -107 LustreError: Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff8802b0d892a8: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 24314:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x200000408:0xf18:0x0] error -108. LustreError: 24314:0:(vvp_io.c:1879:vvp_io_init()) Skipped 2 previous similar messages LustreError: 24415:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 24415:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 40 previous similar messages Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection restored to (at 0@lo) Lustre: 31459:0:(osd_internal.h:1357:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 31459:0:(osd_internal.h:1357:osd_trans_exec_op()) Skipped 697 previous similar messages Lustre: 31459:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 31459:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 697 previous similar messages Lustre: 31459:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 31459:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 697 previous similar messages Lustre: 31459:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 31459:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 697 previous similar messages Lustre: 31459:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 31459:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 697 previous similar messages Lustre: 31459:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 31459:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 697 previous similar messages LustreError: 27876:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802a26d7840/0xd34a8673aaa5b7b9 lrc: 3/0,0 mode: PR/PR res: [0x20000040a:0x1fe:0x0].0x0 bits 0x1b/0x0 rrc: 21 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd34a8673aaa5b79d expref: 120 pid: 8071 timeout: 15910 lvb_type: 0 LustreError: 3755:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) ### lock on destroyed export ffff8803276d53d8 ns: mdt-lustre-MDT0000_UUID lock: ffff8802a12aa5c0/0xd34a8673aaa5c26a lrc: 3/0,0 mode: PR/PR res: [0x20000040a:0x1fe:0x0].0x0 bits 0x1b/0x0 rrc: 17 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xd34a8673aaa5c240 expref: 29 pid: 3755 timeout: 0 lvb_type: 0 LustreError: 27870:0:(ldlm_lockd.c:2572:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1692993002 with bad export cookie 15225129321621445356 LustreError: 11-0: lustre-MDT0000-mdc-ffff8802b0d892a8: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 167-0: lustre-MDT0000-mdc-ffff8802b0d892a8: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 31960:0:(file.c:5360:ll_inode_revalidate_fini()) lustre: revalidate FID [0x20000040a:0x1fe:0x0] error: rc = -5 LustreError: 31960:0:(file.c:5360:ll_inode_revalidate_fini()) Skipped 4 previous similar messages LustreError: 31849:0:(vvp_io.c:1879:vvp_io_init()) lustre: refresh file layout [0x20000040a:0x1fe:0x0] error -108. Lustre: lustre-MDT0000-mdc-ffff8802b0d892a8: Connection restored to (at 0@lo) LustreError: 3755:0:(ldlm_lockd.c:1479:ldlm_handle_enqueue()) Skipped 7 previous similar messages 11[7857]: segfault at 8 ip 00007f7d0764e7e8 sp 00007ffe7059ebd0 error 4 in ld-2.17.so[7f7d07643000+22000] 19[14114]: segfault at 8 ip 00007f147c2a27e8 sp 00007ffc02a67d90 error 4 in ld-2.17.so[7f147c297000+22000] 9[14255]: segfault at 8 ip 00007fb5327817e8 sp 00007fffde5ec620 error 4 in ld-2.17.so[7fb532776000+22000] 9[19375]: segfault at 0 ip (null) sp 00007fff9f0c1608 error 14 in 9[400000+6000] 12[29295]: segfault at 0 ip (null) sp 00007ffe1a8303a8 error 14 in 12[400000+6000] | Link to test |
sanity-flr test 0d: lfs mirror extend with -N option | LustreError: 22792:0:(lod_object.c:4514:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed: LustreError: 22792:0:(lod_object.c:4514:lod_xattr_set()) LBUG Pid: 22792, comm: mdt_rdpg00_001 3.10.0-1160.31.1.el7_lustre.ddn15.x86_64 #1 SMP Fri Jul 2 20:27:10 UTC 2021 Call Trace: [<ffffffffc096c7cc>] libcfs_call_trace+0x8c/0xc0 [libcfs] [<ffffffffc096c87c>] lbug_with_loc+0x4c/0xa0 [libcfs] [<ffffffffc13cdc4c>] lod_xattr_set+0xe3c/0xe80 [lod] [<ffffffffc143efa5>] mdo_xattr_set+0x75/0x190 [mdd] [<ffffffffc144cd5f>] mdd_xattr_set+0x159f/0x18c0 [mdd] [<ffffffffc12faae3>] mo_xattr_set+0x46/0x48 [mdt] [<ffffffffc12c94a0>] mdt_close_handle_layouts+0x8b0/0xc10 [mdt] [<ffffffffc12c9d43>] mdt_mfd_close+0x543/0x870 [mdt] [<ffffffffc12cf911>] mdt_close_internal+0x121/0x220 [mdt] [<ffffffffc12cfc6b>] mdt_close+0x25b/0x7d0 [mdt] [<ffffffffc0f0480e>] tgt_request_handle+0xaee/0x15f0 [ptlrpc] [<ffffffffc0eab7cb>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [<ffffffffc0eaf134>] ptlrpc_main+0xb34/0x1470 [ptlrpc] [<ffffffff8e6c5e31>] kthread+0xd1/0xe0 [<ffffffff8ed95df7>] ret_from_fork_nospec_end+0x0/0x39 [<ffffffffffffffff>] 0xffffffffffffffff | Link to test | |
hot-pools test 8: lamigo: start with debug (-b) command line option | LustreError: 14512:0:(lod_object.c:4514:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed: LustreError: 14512:0:(lod_object.c:4514:lod_xattr_set()) LBUG Pid: 14512, comm: mdt_rdpg00_000 3.10.0-1160.31.1.el7_lustre.ddn15.x86_64 #1 SMP Fri Jul 2 20:27:10 UTC 2021 Call Trace: [<ffffffffc09087cc>] libcfs_call_trace+0x8c/0xc0 [libcfs] [<ffffffffc090887c>] lbug_with_loc+0x4c/0xa0 [libcfs] [<ffffffffc1323c4c>] lod_xattr_set+0xe3c/0xe80 [lod] [<ffffffffc1394fa5>] mdo_xattr_set+0x75/0x190 [mdd] [<ffffffffc13a2d5f>] mdd_xattr_set+0x159f/0x18c0 [mdd] [<ffffffffc1250ae3>] mo_xattr_set+0x46/0x48 [mdt] [<ffffffffc121f4a0>] mdt_close_handle_layouts+0x8b0/0xc10 [mdt] [<ffffffffc121fd43>] mdt_mfd_close+0x543/0x870 [mdt] [<ffffffffc1225911>] mdt_close_internal+0x121/0x220 [mdt] [<ffffffffc1225c6b>] mdt_close+0x25b/0x7d0 [mdt] [<ffffffffc0e0c80e>] tgt_request_handle+0xaee/0x15f0 [ptlrpc] [<ffffffffc0db37cb>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [<ffffffffc0db7134>] ptlrpc_main+0xb34/0x1470 [ptlrpc] [<ffffffffb0ec5e31>] kthread+0xd1/0xe0 [<ffffffffb1595df7>] ret_from_fork_nospec_end+0x0/0x39 [<ffffffffffffffff>] 0xffffffffffffffff | Lustre: DEBUG MARKER: Lustre: Mounted lustre-client Lustre: DEBUG MARKER: mount | grep /mnt/lustre' ' Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/ Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:25 \(1625818525\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:25 \(1625818525\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:25 \(1625818525\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:25 \(1625818525\) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:25 (1625818525) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:25 (1625818525) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:25 (1625818525) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:25 (1625818525) Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:29 \(1625818529\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:29 \(1625818529\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:29 \(1625818529\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 08:15:29 \(1625818529\) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:29 (1625818529) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:29 (1625818529) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:29 (1625818529) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 08:15:29 (1625818529) Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-65vm9.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-65vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-65vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-65vm7.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-65vm9.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-65vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-65vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-65vm7.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl get_param mdd.lustre-MDT0000.changelog_mask -n Lustre: DEBUG MARKER: /usr/sbin/lctl set_param mdd.lustre-MDT0000.changelog_mask=+hsm Lustre: DEBUG MARKER: /usr/sbin/lctl --device lustre-MDT0000 changelog_register -n Lustre: lustre-MDD0000: changelog on Lustre: Skipped 1 previous similar message Lustre: DEBUG MARKER: /usr/sbin/lctl get_param mdd.lustre-MDT0002.changelog_mask -n Lustre: DEBUG MARKER: /usr/sbin/lctl set_param mdd.lustre-MDT0002.changelog_mask=+hsm Lustre: DEBUG MARKER: /usr/sbin/lctl --device lustre-MDT0002 changelog_register -n Lustre: DEBUG MARKER: lctl pool_new lustre.fast Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.fast 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.fast 2>/dev/null || echo foo Lustre: DEBUG MARKER: /usr/sbin/lctl pool_add lustre.fast lustre-OST[0-3/1] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.fast | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.fast | Lustre: DEBUG MARKER: lctl pool_new lustre.slow Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.slow 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.slow 2>/dev/null || echo foo Lustre: DEBUG MARKER: /usr/sbin/lctl pool_add lustre.slow lustre-OST[4-7/1] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.slow | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0002-mdtlov.pools.slow | Lustre: DEBUG MARKER: /usr/sbin/lctl pool_list lustre.fast Lustre: DEBUG MARKER: /usr/sbin/lctl pool_list lustre.fast Lustre: DEBUG MARKER: lamigo -m lustre-MDT0000 -M /mnt/lustre -g trevis-65vm7:/mnt/lustre:8 -u cl9 -s fast -t slow -a 30 -b &> /autotest/autotest-2/2021-07-09/lustre-b_es-reviews_review-dne-part-7_4542_1_27_10d4cae0-2acf-4240-84ea-6a2353a00ee5/hot-pools.test_8.lamigo-lustre-MD Lustre: DEBUG MARKER: cat /var/run/lamigo-lustre-MDT0000.pid Lustre: DEBUG MARKER: pkill --pidfile=/var/run/lamigo-lustre-MDT0000.pid --signal=0 lamigo | Link to test |
hot-pools test 8: lamigo: start with debug (-b) command line option | LustreError: 27351:0:(lod_object.c:4514:lod_xattr_set()) ASSERTION( (!!(!strcmp(name, "lustre.""lov") || !strcmp(name, "trusted.lov")) == !!(!lod_dt_obj(dt)->ldo_comp_cached)) ) failed: LustreError: 27351:0:(lod_object.c:4514:lod_xattr_set()) LBUG Pid: 27351, comm: mdt_rdpg00_002 3.10.0-1160.31.1.el7_lustre.ddn15.x86_64 #1 SMP Fri Jul 2 20:27:10 UTC 2021 Call Trace: [<ffffffffc0a327cc>] libcfs_call_trace+0x8c/0xc0 [libcfs] [<ffffffffc0a3287c>] lbug_with_loc+0x4c/0xa0 [libcfs] [<ffffffffc149cc4c>] lod_xattr_set+0xe3c/0xe80 [lod] [<ffffffffc150dfa5>] mdo_xattr_set+0x75/0x190 [mdd] [<ffffffffc151bd5f>] mdd_xattr_set+0x159f/0x18c0 [mdd] [<ffffffffc13c9ae3>] mo_xattr_set+0x46/0x48 [mdt] [<ffffffffc13984a0>] mdt_close_handle_layouts+0x8b0/0xc10 [mdt] [<ffffffffc1398d43>] mdt_mfd_close+0x543/0x870 [mdt] [<ffffffffc139e911>] mdt_close_internal+0x121/0x220 [mdt] [<ffffffffc139ec6b>] mdt_close+0x25b/0x7d0 [mdt] [<ffffffffc0fca80e>] tgt_request_handle+0xaee/0x15f0 [ptlrpc] [<ffffffffc0f717cb>] ptlrpc_server_handle_request+0x24b/0xab0 [ptlrpc] [<ffffffffc0f75134>] ptlrpc_main+0xb34/0x1470 [ptlrpc] [<ffffffffaa2c5e31>] kthread+0xd1/0xe0 [<ffffffffaa995df7>] ret_from_fork_nospec_end+0x0/0x39 [<ffffffffffffffff>] 0xffffffffffffffff | Lustre: DEBUG MARKER: Lustre: Mounted lustre-client Lustre: DEBUG MARKER: mount | grep /mnt/lustre' ' Lustre: DEBUG MARKER: PATH=/usr/lib64/lustre/tests:/usr/lib/lustre/tests:/usr/lib64/lustre/tests:/opt/iozone/bin:/opt/iozone/bin:/opt/iozone/bin:/opt/iozone/bin:/usr/lib64/lustre/tests/mpi:/usr/lib64/lustre/tests/racer:/usr/lib64/lustre/../lustre-iokit/sgpdd-survey:/usr/lib64/ Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 07:40:01 \(1625816401\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 07:40:01 \(1625816401\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 07:40:01 \(1625816401\) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 07:40:01 (1625816401) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 07:40:01 (1625816401) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 07:40:01 (1625816401) Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl get_param -n version 2>/dev/null || Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 07:40:04 \(1625816404\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 07:40:04 \(1625816404\) Lustre: DEBUG MARKER: /usr/sbin/lctl mark == rpc test complete, duration -o sec ================================================================ 07:40:04 \(1625816404\) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 07:40:04 (1625816404) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 07:40:04 (1625816404) Lustre: DEBUG MARKER: == rpc test complete, duration -o sec ================================================================ 07:40:04 (1625816404) Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-23vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-23vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl mark trevis-23vm7.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-23vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-23vm8.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: trevis-23vm7.trevis.whamcloud.com: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 4 Lustre: DEBUG MARKER: /usr/sbin/lctl get_param mdd.lustre-MDT0000.changelog_mask -n Lustre: DEBUG MARKER: /usr/sbin/lctl set_param mdd.lustre-MDT0000.changelog_mask=+hsm Lustre: DEBUG MARKER: /usr/sbin/lctl --device lustre-MDT0000 changelog_register -n Lustre: lustre-MDD0000: changelog on Lustre: DEBUG MARKER: lctl pool_new lustre.fast Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast 2>/dev/null || echo foo Lustre: DEBUG MARKER: /usr/sbin/lctl pool_add lustre.fast lustre-OST[0-2/1] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.fast | Lustre: DEBUG MARKER: lctl pool_new lustre.slow Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow 2>/dev/null || echo foo Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow 2>/dev/null || echo foo Lustre: DEBUG MARKER: /usr/sbin/lctl pool_add lustre.slow lustre-OST[3-6/1] Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow | Lustre: DEBUG MARKER: lctl get_param -n lod.lustre-MDT0000-mdtlov.pools.slow | Lustre: DEBUG MARKER: /usr/sbin/lctl pool_list lustre.fast Lustre: DEBUG MARKER: /usr/sbin/lctl pool_list lustre.fast Lustre: DEBUG MARKER: lamigo -m lustre-MDT0000 -M /mnt/lustre -g trevis-23vm7:/mnt/lustre:8 -u cl9 -s fast -t slow -a 30 -b &> /autotest/autotest-2/2021-07-09/lustre-b_es-reviews_custom_4542_1_101_6ab56a2d-3692-4c5d-81d5-937416672129/hot-pools.test_8.lamigo-lustre-MDT0000_log. Lustre: DEBUG MARKER: cat /var/run/lamigo-lustre-MDT0000.pid Lustre: DEBUG MARKER: pkill --pidfile=/var/run/lamigo-lustre-MDT0000.pid --signal=0 lamigo | Link to test |