Editing crashreport #1500

Reason:                  ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed
Crashing Function:       mdd_parent_fid
Where to cut Backtrace:  mdd_parent_fid
                         mdd_is_parent
                         mdd_is_subdir
                         mdt_reint_rename
                         mdt_reint_rec
                         mdt_reint_internal
                         mdt_reint
                         tgt_request_handle
                         ptlrpc_server_handle_request
                         ptlrpc_main
                         kthread
Reports Count:           7
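
For orientation only: the assertion above fires when mdd_parent_fid() is handed an object whose mode is not a directory ("type = 100000" is octal S_IFREG, a regular file; a directory would be 040000). The snippet below is a minimal stand-alone C sketch of that kind of check, not the actual Lustre mdd_dir.c code; object_type() is a made-up stand-in for mdd_object_type().

#include <stdio.h>
#include <sys/stat.h>

/* Hypothetical stand-in for mdd_object_type(): returns the S_IFMT bits
 * of the object.  0100000 (S_IFREG) reproduces the "type = 100000"
 * printed in the assertion message; a directory would be 0040000. */
static unsigned int object_type(void)
{
        return 0100000;
}

int main(void)
{
        unsigned int type = object_type();

        /* The LBUG corresponds to this condition being false: the rename
         * path expected the parent FID to resolve to a directory. */
        if (!S_ISDIR(type))
                printf("ASSERTION( S_ISDIR(...) ) would fail: type = %o\n", type);

        return 0;
}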

Added fields:

Match messages in logs
(every line must be present in the log output;
copy from the "Messages before crash" column below):
Match messages in full crash
(every line must be present in the crash log output;
copy from the "Full Crash" column below):
Limit to a test:
(copy from the "Failing Test" column below):
Delete these reports as invalid (e.g., a real bug in code under review)
Bug or comment:
Extra info:
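
As a rough illustration of the "Match messages" semantics above (every listed line must appear somewhere in the captured output), here is a small self-contained C sketch; report_matches() and the sample strings are illustrative only and not part of the crash-report tooling.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Illustrative only: true if every pattern line occurs somewhere in the
 * captured log buffer (plain substring match, no regex). */
static bool report_matches(const char *patterns[], size_t n, const char *log)
{
        for (size_t i = 0; i < n; i++)
                if (strstr(log, patterns[i]) == NULL)
                        return false;
        return true;
}

int main(void)
{
        const char *patterns[] = {
                "mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed",
                "mdd_parent_fid()) LBUG",
        };
        const char *log =
                "LustreError: 31362:0:(mdd_dir.c:213:mdd_parent_fid()) "
                "ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: ...\n"
                "LustreError: 31362:0:(mdd_dir.c:213:mdd_parent_fid()) LBUG\n";

        printf("matches: %s\n",
               report_matches(patterns, sizeof(patterns) / sizeof(patterns[0]), log)
               ? "yes" : "no");
        return 0;
}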

Failures list (last 100):

Failing Test | Full Crash | Messages before crash | Comment
Failing Test: racer test 1: racer on clients: centos-15.localnet DURATION=2700
Full Crash:
LustreError: 31362:0:(mdd_dir.c:213:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 31362:0:(mdd_dir.c:213:mdd_parent_fid()) LBUG
CPU: 2 PID: 31362 Comm: mdt_io00_007 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa0166b4d>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa124a5e7>] mdd_parent_fid+0x3d7/0x3e0 [mdd]
[<ffffffffa124a9d0>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa124acac>] mdd_is_subdir+0x20c/0x250 [mdd]
[<ffffffffa1300c2f>] mdt_reint_rename+0xa9f/0x3ae0 [mdt]
[<ffffffffa0603b67>] ? lustre_msg_add_version+0x27/0xa0 [ptlrpc]
[<ffffffffa03d554e>] ? lu_ucred+0x1e/0x30 [obdclass]
[<ffffffffa130d5b7>] mdt_reint_rec+0x87/0x240 [mdt]
[<ffffffffa12e349c>] mdt_reint_internal+0x73c/0xbb0 [mdt]
[<ffffffffa12e75b5>] ? mdt_thread_info_init+0xa5/0xc0 [mdt]
[<ffffffffa12ea3a7>] mdt_reint+0x67/0x150 [mdt]
[<ffffffffa06d7eee>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<ffffffffa0613dfd>] ptlrpc_server_handle_request+0x23d/0xd80 [ptlrpc]
[<ffffffffa0615cb1>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<ffffffff810dbb51>] ? put_prev_entity+0x31/0x400
[<ffffffffa0615050>] ? ptlrpc_wait_event+0x630/0x630 [ptlrpc]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Messages before crash:
LustreError: 14983:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000403:0x1:0x0]/8 failed: rc = -2
0[28530]: segfault at 8 ip 00007f7afb48f7e8 sp 00007fff1902f740 error 4 in ld-2.17.so[7f7afb484000+22000]
Lustre: 28193:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/9 is open, migrate only dentry
chown (28176) used greatest stack depth: 10064 bytes left
Lustre: 28231:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/16 is open, migrate only dentry
Lustre: 14983:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/5 is open, migrate only dentry
Lustre: 14983:0:(mdd_dir.c:4826:mdd_migrate_object()) Skipped 1 previous similar message
Lustre: 14983:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0001: [0x240000403:0x1:0x0]/14 is open, migrate only dentry
LustreError: 14983:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/19 failed: rc = -2
Lustre: 14983:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0001: [0x240000403:0x1:0x0]/3 is open, migrate only dentry
Lustre: 14983:0:(mdd_dir.c:4826:mdd_migrate_object()) Skipped 1 previous similar message
LustreError: 30895:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8800a1ef53d8: inode [0x200000404:0x58:0x0] mdc close failed: rc = -13
LustreError: 32521:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8800a1ef53d8: inode [0x200000404:0x58:0x0] mdc close failed: rc = -13
Lustre: 3261:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000404:0x81:0x0] with magic=0xbd60bd0
LustreError: 14983:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000404:0x65:0x0]/12 failed: rc = -116
LustreError: 4145:0:(mdd_dir.c:4747:mdd_migrate_cmd_check()) lustre-MDD0000: '8' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 8' to finish migration: rc = -1
7[4321]: segfault at 8 ip 00007ff2db5187e8 sp 00007ffcd79d29b0 error 4 in ld-2.17.so[7ff2db50d000+22000]
Lustre: 4148:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/10 is open, migrate only dentry
Lustre: 4148:0:(mdd_dir.c:4826:mdd_migrate_object()) Skipped 2 previous similar messages
Lustre: dir [0x240000404:0xac:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: 3316:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x39:0x0] with magic=0xbd60bd0
Lustre: 3316:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 31473:0:(mdd_dir.c:4747:mdd_migrate_cmd_check()) lustre-MDD0002: '7' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 7' to finish migration: rc = -1
LustreError: 31473:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x1:0x0]/7 failed: rc = -1
LustreError: 31473:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 1 previous similar message
LustreError: 30648:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0001: unable to read [0x240000402:0x7:0x0] ACL: rc = -2
LustreError: 31647:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8800a1ef53d8: inode [0x240000403:0x5e:0x0] mdc close failed: rc = -2
LustreError: 9196:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb626678: inode [0x280000404:0xa8:0x0] mdc close failed: rc = -13
LustreError: 9579:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x94:0x0]: rc = -5
LustreError: 9579:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 9579:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0xc5:0x0]: rc = -5
LustreError: 9579:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 4882:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/3 failed: rc = -71
Lustre: 27696:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x197:0x0] with magic=0xbd60bd0
Lustre: 27696:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 11044:0:(mdd_dir.c:4826:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/4 is open, migrate only dentry
Lustre: 11044:0:(mdd_dir.c:4826:mdd_migrate_object()) Skipped 4 previous similar messages
LustreError: 27494:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000404:0x1e8:0x0] migrate mdt count mismatch 3 != 1
Lustre: 30661:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff88029ed3b240 x1835393460152832/t4294971140(0) o101->5ed4782a-1955-419e-baa0-e05842cc559d@0@lo:723/0 lens 376/864 e 0 to 0 dl 1750367808 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
LustreError: 11044:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x240000404:0x184:0x0]/17 failed: rc = -2
LustreError: 11044:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 2 previous similar messages
LustreError: 28231:0:(mdd_dir.c:4747:mdd_migrate_cmd_check()) lustre-MDD0000: '8' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 8' to finish migration: rc = -1
LustreError: 12424:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0xd4:0x0]: rc = -5
LustreError: 12424:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
Lustre: 29922:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8803230d8540 x1835393463003648/t4294973536(0) o101->5ed4782a-1955-419e-baa0-e05842cc559d@0@lo:735/0 lens 376/864 e 0 to 0 dl 1750367820 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
LustreError: 16830:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb626678: inode [0x200000403:0x1e5:0x0] mdc close failed: rc = -13
LustreError: 31473:0:(mdd_dir.c:4747:mdd_migrate_cmd_check()) lustre-MDD0002: '2' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 2' to finish migration: rc = -1
LustreError: 31473:0:(mdd_dir.c:4747:mdd_migrate_cmd_check()) Skipped 1 previous similar message
Lustre: dir [0x240000404:0x11b:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
LustreError: 22468:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x14f:0x0]: rc = -5
LustreError: 22468:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 22468:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 22468:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 1 previous similar message
Lustre: 27655:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0xd8:0x0] with magic=0xbd60bd0
Lustre: 27655:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 31602:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000403:0x16e:0x0]/16 failed: rc = -2
LustreError: 31602:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 5 previous similar messages
LustreError: 24557:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x14f:0x0]: rc = -5
LustreError: 24557:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 24557:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 24557:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 1 previous similar message
Lustre: dir [0x240000404:0x11e:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
Link to test
Failing Test: racer test 1: racer on clients: centos-100.localnet DURATION=2700
Full Crash:
LustreError: 32427:0:(mdd_dir.c:213:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 32427:0:(mdd_dir.c:213:mdd_parent_fid()) LBUG
CPU: 12 PID: 32427 Comm: mdt_io00_008 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa0169afd>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa11ee925>] mdd_parent_fid+0x395/0x3d0 [mdd]
[<ffffffffa11eed40>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa11ef01c>] mdd_is_subdir+0x20c/0x250 [mdd]
[<ffffffffa12a72a6>] mdt_reint_rename+0xa56/0x3950 [mdt]
[<ffffffffa0610b47>] ? lustre_msg_add_version+0x27/0xa0 [ptlrpc]
[<ffffffffa12b3967>] mdt_reint_rec+0x87/0x240 [mdt]
[<ffffffffa128a25c>] mdt_reint_internal+0x73c/0xbb0 [mdt]
[<ffffffffa128e375>] ? mdt_thread_info_init+0xa5/0xc0 [mdt]
[<ffffffffa1291167>] mdt_reint+0x67/0x150 [mdt]
[<ffffffffa06e23ce>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<ffffffffa0620d87>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<ffffffffa0622b71>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<ffffffff810dbb51>] ? put_prev_entity+0x31/0x400
[<ffffffffa0621f10>] ? ptlrpc_wait_event+0x630/0x630 [ptlrpc]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
Messages before crash:
Lustre: 29177:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/3 is open, migrate only dentry
Lustre: 18940:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802bb1b3740 x1832565951117824/t4294967726(0) o101->51b0d930-503f-4794-9315-8b2d6aa6d0a7@0@lo:197/0 lens 384/840 e 0 to 0 dl 1747671177 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
LustreError: 15479:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0000: '17' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 17' to finish migration: rc = -1
LustreError: 15479:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/17 failed: rc = -1
LustreError: 32147:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/5 failed: rc = -2
Lustre: dir [0x200000404:0xd:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: 30341:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/4 is open, migrate only dentry
LustreError: 15479:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0001: migrate [0x200000403:0x4a:0x0]/16 failed: rc = -2
LustreError: 32762:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb02c138: inode [0x200000404:0xc5:0x0] mdc close failed: rc = -2
Lustre: 29177:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/4 is open, migrate only dentry
Lustre: 29177:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 3 previous similar messages
Lustre: dir [0x200000403:0x4a:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 2 previous similar messages
LustreError: 15479:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000404:0x55:0x0]/20 failed: rc = -2
LustreError: 2838:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb02c138: inode [0x200000404:0xd6:0x0] mdc close failed: rc = -13
Lustre: dir [0x200000403:0x4a:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
ls (27426) used greatest stack depth: 10000 bytes left
LustreError: 4597:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x200000403:0x4a:0x0] mdc close failed: rc = -2
Lustre: 15461:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802a7aa1e40 x1832565955677184/t4294968974(0) o101->8e6de1fd-5abd-4195-897b-1dfad6890cdd@0@lo:236/0 lens 376/864 e 0 to 0 dl 1747671216 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
Lustre: 29177:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/8 is open, migrate only dentry
Lustre: dir [0x240000404:0x51:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
LustreError: 32214:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0002: migrate [0x240000403:0x7b:0x0]/7 failed: rc = -2
Lustre: 27782:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x14e:0x0] with magic=0xbd60bd0
Lustre: 2119:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/5 is open, migrate only dentry
Lustre: 2119:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 1 previous similar message
LustreError: 29177:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '18' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 18' to finish migration: rc = -1
LustreError: 30341:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x160:0x0]/4 failed: rc = -116
LustreError: 30341:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 1 previous similar message
11[11033]: segfault at 8 ip 00007f501063c7e8 sp 00007ffe2613aac0 error 4 in ld-2.17.so[7f5010631000+22000]
LustreError: 12059:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x200000404:0x199:0x0] mdc close failed: rc = -13
LustreError: 15201:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0x8d:0x0]: rc = -5
LustreError: 15201:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 10577:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x280000403:0x86:0x0] mdc close failed: rc = -2
Lustre: 32147:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x123:0x0]/18 is open, migrate only dentry
Lustre: 32147:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 3 previous similar messages
LustreError: 32214:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0000: '15' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 15' to finish migration: rc = -1
LustreError: 32214:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/15 failed: rc = -1
LustreError: 32214:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 1 previous similar message
Lustre: 7423:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000404:0x187:0x0] with magic=0xbd60bd0
Lustre: 7423:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 20036:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x293:0x0]: rc = -5
LustreError: 20036:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 20036:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 20036:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 1 previous similar message
LustreError: 1647:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '15' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 15' to finish migration: rc = -1
Lustre: 10547:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0001: [0x240000403:0x6a:0x0]/13 is open, migrate only dentry
Lustre: 10547:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 2 previous similar messages
Lustre: dir [0x200000404:0x189:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
LustreError: 3909:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb02c138: inode [0x200000403:0xb6:0x0] mdc close failed: rc = -2
LustreError: 15478:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0001: '14' migration was interrupted, run 'lfs migrate -m 2 -c 2 -H crush 14' to finish migration: rc = -1
LustreError: 28418:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0x200:0x0]: rc = -5
LustreError: 28418:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 29762:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x1cf:0x0]: rc = -5
LustreError: 29762:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 7 previous similar messages
LustreError: 29762:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 29762:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 7 previous similar messages
LustreError: 1770:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0000: migrate [0x280000403:0x16c:0x0]/17 failed: rc = -2
LustreError: 1770:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 8 previous similar messages
Lustre: lustre-MDT0002: trigger partial OI scrub for RPC inconsistency, checking FID [0x280000403:0x16c:0x0]/0xa): rc = 0
LustreError: 29832:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x16c:0x0] migrate mdt count mismatch 1 != 2
LustreError: 31275:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x2ee:0x0]: rc = -5
LustreError: 31275:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 7 previous similar messages
LustreError: 31275:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 31275:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 7 previous similar messages
10[30005]: segfault at 8 ip 00007f112cb9d7e8 sp 00007ffc104a3e00 error 4 in ld-2.17.so[7f112cb92000+22000]
LustreError: 32427:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '13' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 13' to finish migration: rc = -1
Lustre: 1786:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/9 is open, migrate only dentry
Lustre: 1786:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 5 previous similar messages
9[2386]: segfault at 8 ip 00007f27b03b97e8 sp 00007ffeb38b1000 error 4 in ld-2.17.so[7f27b03ae000+22000]
LustreError: 27194:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x2ad:0x0]: rc = -5
LustreError: 27194:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 2 previous similar messages
LustreError: 27194:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 27194:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 2 previous similar messages
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x240000403:0x234:0x0] inode@0000000000000000: rc = -5
LustreError: 9527:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x200000404:0x44f:0x0] mdc close failed: rc = -13
LustreError: 9527:0:(file.c:247:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 1786:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '8' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 8' to finish migration: rc = -1
LustreError: 1786:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) Skipped 1 previous similar message
LustreError: 13262:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x26d:0x0]: rc = -5
LustreError: 13262:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 19 previous similar messages
LustreError: 13262:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 13262:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 19 previous similar messages
Lustre: 12758:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x26e:0x0] with magic=0xbd60bd0
Lustre: 12758:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 28076:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x368:0x0] with magic=0xbd60bd0
Lustre: 28076:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 11 previous similar messages
Lustre: dir [0x240000403:0x232:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: 1909:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802856c9440 x1832565976246528/t4294979038(0) o101->8e6de1fd-5abd-4195-897b-1dfad6890cdd@0@lo:352/0 lens 376/840 e 0 to 0 dl 1747671332 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
Lustre: 2012:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x298:0x0] with magic=0xbd60bd0
Lustre: 2012:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 32128:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0001: [0x240000403:0x15d:0x0] doesn't exist!: rc = -14
10[25384]: segfault at 8 ip 00007fc85fc6a7e8 sp 00007ffddb2f7390 error 4 in ld-2.17.so[7fc85fc5f000+22000]
Lustre: 4748:0:(mdt_reint.c:2460:mdt_reint_migrate()) lustre-MDT0000: [0x200000403:0x1:0x0]/14 is open, migrate only dentry
LustreError: 21881:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0001: '16' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 16' to finish migration: rc = -1
LustreError: 21881:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) Skipped 2 previous similar messages
LustreError: 21881:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0001: migrate [0x280000403:0x1:0x0]/16 failed: rc = -1
LustreError: 21881:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 13 previous similar messages
LustreError: 17262:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x240000403:0x28c:0x0] : rc = -5
LustreError: 17262:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x240000403:0x28c:0x0] error -5.
LustreError: 25517:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x240000403:0x28c:0x0] : rc = -5
LustreError: 25517:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x28c:0x0]: rc = -5
LustreError: 25517:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 45 previous similar messages
LustreError: 25517:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 25517:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 45 previous similar messages
Lustre: 10113:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/0 is open, migrate only dentry
Lustre: 10113:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 22 previous similar messages
LustreError: 29964:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x240000403:0x28c:0x0] : rc = -5
Lustre: lustre-MDT0000: trigger partial OI scrub for RPC inconsistency, checking FID [0x200000403:0x28b:0x0]/0xa): rc = 0
Lustre: 7419:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x61d:0x0] with magic=0xbd60bd0
Lustre: 7419:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 11 previous similar messages
10[1301]: segfault at 8 ip 00007fde586187e8 sp 00007ffca29a6b20 error 4 in ld-2.17.so[7fde5860d000+22000]
LustreError: 28626:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000403:0x5d2:0x0] migrate mdt count mismatch 2 != 1
LustreError: 7341:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x200000403:0x67f:0x0] mdc close failed: rc = -13
LustreError: 7341:0:(file.c:247:ll_close_inode_openhandle()) Skipped 1 previous similar message
Lustre: 32291:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x6ad:0x0] with magic=0xbd60bd0
Lustre: 32291:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 16832:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x794:0x0]: rc = -5
LustreError: 16832:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 55 previous similar messages
LustreError: 16832:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 16832:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 55 previous similar messages
LustreError: 32427:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0000: '17' migration was interrupted, run 'lfs migrate -m 2 -c 2 -H crush 17' to finish migration: rc = -1
LustreError: 32427:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) Skipped 6 previous similar messages
LustreError: 20977:0:(statahead.c:2447:start_statahead_thread()) lustre: unsupported statahead pattern 0X0.
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 15 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 2 previous similar messages
Lustre: mdt_io00_016: service thread pid 18643 was inactive for 72.191 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 18643, comm: mdt_io00_016 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_rename_source_lock+0x57/0xf0 [mdt]
[<0>] mdt_reint_migrate+0x1832/0x24b0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x73c/0xbb0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_018: service thread pid 4748 was inactive for 72.245 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 4748, comm: mdt_io00_018 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_fini+0xabc/0xfd0 [ptlrpc]
[<0>] ldlm_cli_enqueue+0x461/0xb00 [ptlrpc]
[<0>] osp_md_object_lock+0x151/0x2f0 [osp]
[<0>] lod_object_lock+0xdb/0x7c0 [lod]
[<0>] mdd_object_lock+0x2d/0xd0 [mdd]
[<0>] mdt_remote_object_lock_try+0x14c/0x189 [mdt]
[<0>] mdt_object_lock_internal+0x3c4/0x470 [mdt]
[<0>] mdt_rename_lock+0xd9/0x360 [mdt]
[<0>] mdt_reint_rename+0x144a/0x3950 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x73c/0xbb0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Pid: 1786, comm: mdt_io00_012 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_fini+0xabc/0xfd0 [ptlrpc]
[<0>] ldlm_cli_enqueue+0x461/0xb00 [ptlrpc]
[<0>] osp_md_object_lock+0x151/0x2f0 [osp]
[<0>] lod_object_lock+0xdb/0x7c0 [lod]
[<0>] mdd_object_lock+0x2d/0xd0 [mdd]
[<0>] mdt_remote_object_lock_try+0x14c/0x189 [mdt]
[<0>] mdt_object_lock_internal+0x3c4/0x470 [mdt]
[<0>] mdt_rename_lock+0xd9/0x360 [mdt]
[<0>] mdt_reint_migrate+0x87e/0x24b0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x73c/0xbb0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_008: service thread pid 32427 was inactive for 72.091 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: mdt_io00_019: service thread pid 10113 was inactive for 72.249 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 2 previous similar messages
Lustre: mdt_io00_015: service thread pid 10882 was inactive for 72.036 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 2 previous similar messages
Lustre: mdt_io00_000: service thread pid 15478 was inactive for 72.233 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 4 previous similar messages
Lustre: mdt_io00_022: service thread pid 30157 was inactive for 72.166 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 5 previous similar messages
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0002_UUID lock: ffff88028496bc40/0x4bc8a069926f9a2 lrc: 3/0,0 mode: PR/PR res: [0x280000404:0x596:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a069926f986 expref: 414 pid: 2032 timeout: 499 lvb_type: 0
Lustre: 18643:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x39b:0x0]/9 is open, migrate only dentry
Lustre: 18643:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 15 previous similar messages
Lustre: mdt_io00_016: service thread pid 18643 completed after 106.523s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0002-mdc-ffff8802c8e02548: operation mds_getattr_lock to node 0@lo failed: rc = -107
Lustre: lustre-MDT0002-mdc-ffff8802c8e02548: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0002-mdc-ffff8802c8e02548: This client was evicted by lustre-MDT0002; in progress operations using this service will fail.
LustreError: 23744:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x280000404:0x700:0x0] mdc close failed: rc = -108
LustreError: 23744:0:(file.c:247:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 19053:0:(mdc_request.c:1469:mdc_read_page()) lustre-MDT0002-mdc-ffff8802c8e02548: [0x280000400:0x22:0x0] lock enqueue fails: rc = -108
Lustre: mdt_io00_012: service thread pid 1786 completed after 106.145s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: dir [0x280000404:0x6d0:0x0] stripe 0 readdir failed: -108, directory is partially accessed!
Lustre: Skipped 13 previous similar messages
Lustre: lustre-MDT0002-mdc-ffff8802c8e02548: Connection restored to (at 0@lo)
Lustre: mdt_io00_018: service thread pid 4748 completed after 106.177s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 10208:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0001: migrate [0x280000404:0x70f:0x0]/0 failed: rc = -2
LustreError: 10208:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 21 previous similar messages
Lustre: mdt_io00_020: service thread pid 10208 completed after 105.141s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_008: service thread pid 32427 completed after 105.086s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_007: service thread pid 32214 completed after 103.410s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_017: service thread pid 21881 completed after 103.337s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_019: service thread pid 10113 completed after 103.634s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_004: service thread pid 29319 completed after 102.741s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_015: service thread pid 10882 completed after 101.696s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_009: service thread pid 1613 completed after 101.434s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_002: service thread pid 15480 completed after 101.075s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_010: service thread pid 1647 completed after 101.468s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_003: service thread pid 29177 completed after 100.566s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_000: service thread pid 15478 completed after 100.361s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_001: service thread pid 15479 completed after 99.454s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_011: service thread pid 1770 completed after 99.185s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_014: service thread pid 10547 completed after 98.739s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_005: service thread pid 30341 completed after 98.897s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_021: service thread pid 30073 completed after 101.738s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_022: service thread pid 30157 completed after 98.627s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_023: service thread pid 31094 completed after 97.617s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 1770:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '7' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 7' to finish migration: rc = -1
LustreError: 1770:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) Skipped 2 previous similar messages
LustreError: 19687:0:(statahead.c:2399:start_statahead_thread()) lustre: invalid pattern 0X0.
LustreError: 27890:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x1ea:0x0]: rc = -5
LustreError: 27890:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 11 previous similar messages
LustreError: 27890:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 27890:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 11 previous similar messages
LustreError: 28860:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x280000403:0x1cf:0x0] : rc = -5
LustreError: 28860:0:(lov_object.c:1348:lov_layout_change()) Skipped 1 previous similar message
LustreError: 28860:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x280000403:0x1cf:0x0] error -5.
Lustre: lustre-MDT0001: trigger partial OI scrub for RPC inconsistency, checking FID [0x240000404:0x757:0x0]/0xa): rc = 0
LustreError: 23577:0:(osd_index.c:204:__osd_xattr_load_by_oid()) lustre-MDT0001: can't get bonus, rc = -2
LustreError: 25333:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x280000403:0x1cf:0x0] : rc = -5
Lustre: 1963:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x701:0x0] with magic=0xbd60bd0
Lustre: 1963:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 15 previous similar messages
Lustre: 29177:0:(mdt_reint.c:2460:mdt_reint_migrate()) lustre-MDT0002: [0x280000403:0x1:0x0]/5 is open, migrate only dentry
9[540]: segfault at 8 ip 00007fa893d3d7e8 sp 00007ffd87a10190 error 4 in ld-2.17.so[7fa893d32000+22000]
LustreError: 4714:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802cb02c138: cannot apply new layout on [0x200000404:0xc8c:0x0] : rc = -5
LustreError: 4714:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000404:0xc8c:0x0] error -5.
LustreError: 2030:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000405:0x31e:0x0] migrate mdt count mismatch 2 != 3
13[25532]: segfault at 8 ip 00007f69f84597e8 sp 00007ffc691d8f60 error 4 in ld-2.17.so[7f69f844e000+22000]
13[22656]: segfault at 8 ip 00007f1a5d4f27e8 sp 00007ffcdb87f130 error 4 in ld-2.17.so[7f1a5d4e7000+22000]
LustreError: 27810:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x16c:0x0] migrate mdt count mismatch 1 != 3
LustreError: 27810:0:(mdt_xattr.c:402:mdt_dir_layout_update()) Skipped 1 previous similar message
13[25576]: segfault at 8 ip 00007fa8061517e8 sp 00007ffc9c955d90 error 4 in ld-2.17.so[7fa806146000+22000]
7[29448]: segfault at 8 ip 00007fa1555a47e8 sp 00007ffc5a324c10 error 4 in ld-2.17.so[7fa155599000+22000]
LustreError: 28626:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0000: [0x200000404:0x590:0x0] doesn't exist!: rc = -14
4[28765]: segfault at 0 ip (null) sp 00007ffe964c2cc8 error 14 in 4[400000+6000]
LustreError: 3861:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000404:0xe25:0x0] migrate mdt count mismatch 1 != 3
Lustre: 1923:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000405:0x4c9:0x0] with magic=0xbd60bd0
Lustre: 1923:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 25 previous similar messages
LustreError: 6184:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb02c138: inode [0x200000404:0xe54:0x0] mdc close failed: rc = -13
LustreError: 6184:0:(file.c:247:ll_close_inode_openhandle()) Skipped 29 previous similar messages
LustreError: 3825:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x16c:0x0] migrate mdt count mismatch 1 != 3
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
LustreError: 188:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 188:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
LustreError: 192:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 192:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 3 previous similar messages
LustreError: 191:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 191:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 35 previous similar messages
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 30488:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 70 previous similar messages
Lustre: mdt00_033: service thread pid 29776 was inactive for 72.280 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 1 previous similar message
LustreError: 15925:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0xccd:0x0]: rc = -5
LustreError: 15925:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1172 previous similar messages
LustreError: 15925:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 15925:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 1172 previous similar messages
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802cceec3c0/0x4bc8a0699440784 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0xfc8:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a0699440753 expref: 557 pid: 3771 timeout: 773 lvb_type: 0
Lustre: mdt00_033: service thread pid 29776 completed after 100.538s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_068: service thread pid 7425 completed after 100.154s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0000-mdc-ffff8802cb02c138: operation mds_getattr to node 0@lo failed: rc = -107
LustreError: Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff8802cb02c138: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8802cb02c138: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 6960:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0xfc8:0x0] error: rc = -108
Lustre: 21881:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0xc46:0x0]/16 is open, migrate only dentry
Lustre: 21881:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 45 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8802cb02c138: Connection restored to (at 0@lo)
LustreError: 31094:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000403:0xc0a:0x0]/3 failed: rc = -2
LustreError: 31094:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 38 previous similar messages
LustreError: 32307:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0001: unable to read [0x240000403:0xe90:0x0] ACL: rc = -2
Lustre: dir [0x240000403:0xc0a:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
Lustre: lustre-OST0000-osc-ffff8802c8e02548: disconnect after 20s idle
INFO: task ls:20996 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff8802b004d2c0 11280 20996 27173 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task ls:20997 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff8802b4634240 11280 20997 27173 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task ls:21001 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff88008f4a3d58 11280 21001 27173 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task ls:21022 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff8802c4798040 11072 21022 27123 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task ls:21063 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff880093452ae8 11008 21063 27199 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task ls:21066 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff880095efbd58 11280 21066 27199 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task ls:21067 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff880095efe238 11280 21067 27199 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81255126>] do_last+0x296/0x1280
[<ffffffff8125475e>] ? link_path_walk+0x27e/0x8c0
[<ffffffff8125712d>] path_openat+0xcd/0x5b0
[<ffffffff81258e7d>] do_filp_open+0x4d/0xb0
[<ffffffff814119f9>] ? do_raw_spin_unlock+0x49/0x90
[<ffffffff817e324e>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81267913>] ? __alloc_fd+0xc3/0x170
[<ffffffff81244864>] do_sys_open+0x124/0x220
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81244994>] SyS_openat+0x14/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task mrename:8870 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mrename D ffff8802ab7ae238 11696 8870 26831 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81252061>] lock_rename+0x31/0xe0
[<ffffffff812586af>] SYSC_renameat2+0x22f/0x570
[<ffffffff811ed462>] ? handle_mm_fault+0xc2/0x150
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff812597de>] SyS_renameat2+0xe/0x10
[<ffffffff8125981e>] SyS_rename+0x1e/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task rm:10254 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rm D ffff88008daca4f0 11008 10254 27112 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81258135>] do_rmdir+0x165/0x200
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff81259385>] SyS_unlinkat+0x25/0x40
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
INFO: task mrename:10358 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mrename D ffff8803253e3760 10928 10358 27147 0x00000080
Call Trace:
[<ffffffff817e19d9>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817df7ea>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817dfa1d>] mutex_lock+0x2d/0x40
[<ffffffff81252061>] lock_rename+0x31/0xe0
[<ffffffff812586af>] SYSC_renameat2+0x22f/0x570
[<ffffffff811ed462>] ? handle_mm_fault+0xc2/0x150
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
[<ffffffff817edf49>] ? system_call_after_swapgs+0x96/0x13a
[<ffffffff812597de>] SyS_renameat2+0xe/0x10
[<ffffffff8125981e>] SyS_rename+0x1e/0x20
[<ffffffff817ee00c>] system_call_fastpath+0x1f/0x24
[<ffffffff817edf55>] ? system_call_after_swapgs+0xa2/0x13a
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff8802a699b100/0x4bc8a06994b6785 lrc: 3/0,0 mode: PR/PR res: [0x240000403:0x1b75:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a06994b670e expref: 414 pid: 3813 timeout: 879 lvb_type: 0
Lustre: 1647:0:(mdt_reint.c:2460:mdt_reint_migrate()) lustre-MDT0001: [0x240000403:0x1:0x0]/2 is open, migrate only dentry
LustreError: lustre-MDT0001-mdc-ffff8802c8e02548: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff8802c8e02548: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff8802c8e02548: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 19462:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000403:0x1b75:0x0] error: rc = -5
LustreError: 19462:0:(file.c:6143:ll_inode_revalidate_fini()) Skipped 6 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff8802c8e02548: Connection restored to (at 0@lo)
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802ce46a980/0x4bc8a06994c6863 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x43:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a06994c6840 expref: 336 pid: 2002 timeout: 982 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8802c8e02548: operation mds_close to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8802c8e02548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8802c8e02548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 17817:0:(llite_lib.c:2023:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 17817:0:(llite_lib.c:2023:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 15161:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802c8e02548: inode [0x200000403:0x771:0x0] mdc close failed: rc = -5
LustreError: 15161:0:(file.c:247:ll_close_inode_openhandle()) Skipped 43 previous similar messages
LustreError: 19796:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x43:0x0] error: rc = -108
Lustre: lustre-MDT0000-mdc-ffff8802c8e02548: Connection restored to (at 0@lo)
LustreError: 32214:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '14' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 14' to finish migration: rc = -1
LustreError: 32214:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) Skipped 15 previous similar messages
Lustre: 32108:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x2d32:0x0] with magic=0xbd60bd0
Lustre: 32108:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 19 previous similar messages
Lustre: dir [0x240000403:0x2d63:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
LustreError: 13343:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 10 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 13343:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 72 previous similar messages
15[7410]: segfault at 8 ip 00007fde625a57e8 sp 00007ffca421ce10 error 4 in ld-2.17.so[7fde6259a000+22000]
LustreError: 3825:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0000: [0x200000404:0x590:0x0] doesn't exist!: rc = -14
LustreError: 19684:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802cb02c138: cannot apply new layout on [0x200000406:0x1b7:0x0] : rc = -5
LustreError: 19684:0:(lov_object.c:1348:lov_layout_change()) Skipped 1 previous similar message
LustreError: 19684:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000406:0x1b7:0x0] error -5.
LustreError: 29837:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0000: [0x200000404:0x590:0x0] doesn't exist!: rc = -14
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff88029e16bc40/0x4bc8a06996418fc lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x437:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a06996418e0 expref: 288 pid: 29776 timeout: 1218 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8802c8e02548: operation mds_getattr to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8802c8e02548: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8802c8e02548: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 14364:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 14364:0:(file.c:6143:ll_inode_revalidate_fini()) Skipped 1 previous similar message
LustreError: 14363:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff8802c8e02548: namespace resource [0x200000405:0x437:0x0].0x0 (ffff88028f9611c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff8802c8e02548: Connection restored to (at 0@lo)
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff880277ee5680/0x4bc8a06996755fe lrc: 3/0,0 mode: PR/PR res: [0x240000403:0x2f07:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a06996755b8 expref: 325 pid: 2708 timeout: 1244 lvb_type: 0
LustreError: 7423:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export ffff8800a0fc12a8 ns: mdt-lustre-MDT0001_UUID lock: ffff8802add90400/0x4bc8a069967cd36 lrc: 3/0,0 mode: PR/PR res: [0x240000400:0x45:0x0].0x0 bits 0x12/0x0 rrc: 12 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x4bc8a069967cd0c expref: 14 pid: 7423 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0001-mdc-ffff8802cb02c138: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff8802cb02c138: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff8802cb02c138: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 19353:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000403:0x2dfe:0x0] error: rc = -5
LustreError: 19353:0:(file.c:6143:ll_inode_revalidate_fini()) Skipped 2 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff8802cb02c138: Connection restored to (at 0@lo)
LustreError: 16405:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802cb02c138: cannot apply new layout on [0x200000406:0x1b7:0x0] : rc = -5
LustreError: 16405:0:(lov_object.c:1348:lov_layout_change()) Skipped 1 previous similar message
Lustre: 29776:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000405:0xc99:0x0] with magic=0xbd60bd0
Lustre: 29776:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 79 previous similar messages
Lustre: dir [0x240000406:0x29:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 13 previous similar messages
Lustre: 30157:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0001: [0x200000403:0x2:0x0]/9 is open, migrate only dentry
Lustre: 30157:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 40 previous similar messages
LustreError: 32427:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0000: migrate [0x280000405:0xbda:0x0]/19 failed: rc = -2
LustreError: 32427:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 35 previous similar messages
LustreError: 27110:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x302a:0x0]: rc = -5
LustreError: 27110:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 75 previous similar messages
LustreError: 27110:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 27110:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 75 previous similar messages
LustreError: 29861:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0001: unable to read [0x240000406:0x7f:0x0] ACL: rc = -2
LustreError: 12380:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x200000405:0x5f8:0x0] : rc = -5
LustreError: 12380:0:(lov_object.c:1348:lov_layout_change()) Skipped 3 previous similar messages
LustreError: 12380:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000405:0x5f8:0x0] error -5.
2[21055]: segfault at 46474e550 ip 00007f3eb45f50cc sp 00007ffd6eb86678 error 4 in ld-2.17.so[7f3eb45ea000+22000]
LustreError: 32309:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000405:0x810:0x0] ACL: rc = -2
0[22890]: segfault at 8 ip 00007ff717a537e8 sp 00007ffe7da04ff0 error 4 in ld-2.17.so[7ff717a48000+22000]
LustreError: 29928:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000405:0x5f8:0x0] error -5.
LustreError: 21881:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=4 index=3 hash=crush:0x82000003 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
LustreError: 15466:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000406:0x29:0x0]: rc = -2
LustreError: 28076:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x13fe:0x0] migrate mdt count mismatch 3 != 1
LustreError: 29757:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0000: [0x200000404:0x590:0x0] doesn't exist!: rc = -14
6[11365]: segfault at 8 ip 00007f654a5377e8 sp 00007fff59d1ee00 error 4 in ld-2.17.so[7f654a52c000+22000]
LustreError: 29848:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0000: [0x200000404:0x590:0x0] doesn't exist!: rc = -14
LustreError: 13343:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 20 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 13343:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
LustreError: 7939:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x280000403:0x1369:0x0] error -5.
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802d7a33c40/0x4bc8a069981f397 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0xa29:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a069981f37b expref: 438 pid: 2002 timeout: 1526 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8802cb02c138: operation mds_close to node 0@lo failed: rc = -107
LustreError: Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff8802cb02c138: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8802cb02c138: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 21974:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb02c138: inode [0x200000405:0xa29:0x0] mdc close failed: rc = -5
LustreError: 21974:0:(file.c:247:ll_close_inode_openhandle()) Skipped 50 previous similar messages
LustreError: 20457:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000405:0xa29:0x0] error -108.
LustreError: 20553:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 20553:0:(file.c:6143:ll_inode_revalidate_fini()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff8802cb02c138: Connection restored to (at 0@lo)
LustreError: 31094:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) lustre-MDD0002: '0' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 0' to finish migration: rc = -1
LustreError: 31094:0:(mdd_dir.c:4743:mdd_migrate_cmd_check()) Skipped 18 previous similar messages
LustreError: 12757:0:(mdt_open.c:1302:mdt_cross_open()) lustre-MDT0000: [0x200000404:0x590:0x0] doesn't exist!: rc = -14
14[31809]: segfault at 8 ip 00007f9dc53817e8 sp 00007ffed3840160 error 4 in ld-2.17.so[7f9dc5376000+22000]
LustreError: 3288:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802c8e02548: cannot apply new layout on [0x280000403:0x1369:0x0] : rc = -5
LustreError: 3288:0:(lov_object.c:1348:lov_layout_change()) Skipped 6 previous similar messages
LustreError: 13684:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000405:0xf15:0x0]: rc = -2
LustreError: 13684:0:(mdd_object.c:3901:mdd_close()) Skipped 1 previous similar message
Lustre: dir [0x280000403:0x166c:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 15 previous similar messages
LustreError: 2028:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x1656:0x0] migrate mdt count mismatch 1 != 3
LustreError: 12758:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000408:0x35d:0x0] ACL: rc = -2
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff8802c7246580/0x4bc8a06998d62cb lrc: 3/0,0 mode: PR/PR res: [0x240000405:0x130f:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a06998d62a8 expref: 272 pid: 7436 timeout: 1700 lvb_type: 0
LustreError: lustre-MDT0001-mdc-ffff8802cb02c138: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff8802cb02c138: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff8802cb02c138: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 23615:0:(llite_lib.c:2023:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 23615:0:(llite_lib.c:2023:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 22536:0:(file.c:6143:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000405:0x130f:0x0] error: rc = -108
LustreError: 22536:0:(file.c:6143:ll_inode_revalidate_fini()) Skipped 3 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff8802cb02c138: Connection restored to (at 0@lo)
LustreError: 190:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 17 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 190:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 3 previous similar messages
LustreError: 2002:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000407:0x7e0:0x0] ACL: rc = -2
LustreError: 2002:0:(mdt_handler.c:746:mdt_pack_acl2body()) Skipped 1 previous similar message
1[3808]: segfault at 8 ip 00007f117fba37e8 sp 00007fff6bb44910 error 4 in ld-2.17.so[7f117fb98000+22000]
LustreError: 29861:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0001: [0x240000405:0x12b0:0x0] migrate mdt count mismatch 3 != 2
Lustre: 28235:0:(lod_lov.c:1414:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000405:0x14d3:0x0] with magic=0xbd60bd0
Lustre: 28235:0:(lod_lov.c:1414:lod_parse_striping()) Skipped 109 previous similar messages
LustreError: 26765:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8802cb02c138: cannot apply new layout on [0x240000405:0x107c:0x0] : rc = -5
LustreError: 26765:0:(lov_object.c:1348:lov_layout_change()) Skipped 7 previous similar messages
LustreError: 15162:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff880284adb880/0x4bc8a069997b2b2 lrc: 3/0,0 mode: PR/PR res: [0x240000407:0xf4:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4bc8a069997b296 expref: 353 pid: 32251 timeout: 1868 lvb_type: 0
LustreError: 32373:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1747672908 with bad export cookie 341299432719657980
LustreError: lustre-MDT0001-mdc-ffff8802c8e02548: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff8802c8e02548: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff8802c8e02548: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
Lustre: lustre-MDT0001-mdc-ffff8802c8e02548: Connection restored to (at 0@lo)
LustreError: 3389:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000407:0x1a92:0x0] error -5.
LustreError: 3389:0:(vvp_io.c:1905:vvp_io_init()) Skipped 2 previous similar messages
Lustre: 29319:0:(mdd_dir.c:4822:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/6 is open, migrate only dentry
Lustre: 29319:0:(mdd_dir.c:4822:mdd_migrate_object()) Skipped 58 previous similar messages
LustreError: 10208:0:(mdt_reint.c:2540:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000408:0x72e:0x0]/19 failed: rc = -2
LustreError: 10208:0:(mdt_reint.c:2540:mdt_reint_migrate()) Skipped 49 previous similar messages
LustreError: 26787:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000407:0x1a29:0x0]: rc = -5
LustreError: 26787:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 156 previous similar messages
LustreError: 26787:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 26787:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 156 previous similar messages
LustreError: 32108:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0001: [0x240000406:0x723:0x0] migrate mdt count mismatch 3 != 1
6[3879]: segfault at 0 ip 0000000000403e5f sp 00007ffdd49dfcc0 error 6 in 6[400000+6000]
LustreError: 7366:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000407:0x1e94:0x0] migrate mdt count mismatch 2 != 3
19[18451]: segfault at 8 ip 00007f52fa36c7e8 sp 00007fffeae70660 error 4 in ld-2.17.so[7f52fa361000+22000]
12[18714]: segfault at 8 ip 00007f8cce7d07e8 sp 00007fff8f817f80 error 4 in ld-2.17.so[7f8cce7c5000+22000]
LustreError: 14142:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 9 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 14142:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 2 previous similar messages
3[31594]: segfault at 8 ip 00007fcc041ff7e8 sp 00007ffd8b80f4c0 error 4 in ld-2.17.so[7fcc041f4000+22000]
LustreError: 23138:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x240000407:0xd99:0x0] error -5.
LustreError: 23138:0:(vvp_io.c:1905:vvp_io_init()) Skipped 4 previous similar messages
LustreError: 12746:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0001: [0x240000407:0xd92:0x0] migrate mdt count mismatch 3 != 1
LustreError: 27810:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000408:0xd2b:0x0] ACL: rc = -2
LustreError: 27810:0:(mdt_handler.c:746:mdt_pack_acl2body()) Skipped 1 previous similar message
Link to test
racer test 1: racer on clients: centos-55.localnet DURATION=2700
LustreError: 27643:0:(mdd_dir.c:213:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 27643:0:(mdd_dir.c:213:mdd_parent_fid()) LBUG
CPU: 8 PID: 27643 Comm: mdt_io00_004 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa016aafd>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa11e5667>] mdd_parent_fid+0x3d7/0x3e0 [mdd]
[<ffffffffa11e5a50>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa11e5d2c>] mdd_is_subdir+0x20c/0x250 [mdd]
[<ffffffffa129afd2>] mdt_reint_rename+0xfe2/0x2bf0 [mdt]
[<ffffffffa060db47>] ? lustre_msg_add_version+0x27/0xa0 [ptlrpc]
[<ffffffffa03e11de>] ? lu_ucred+0x1e/0x30 [obdclass]
[<ffffffffa128f745>] ? mdt_ucred+0x15/0x20 [mdt]
[<ffffffffa12a6347>] mdt_reint_rec+0x87/0x240 [mdt]
[<ffffffffa127d95f>] mdt_reint_internal+0x84f/0x13d0 [mdt]
[<ffffffffa1282185>] ? mdt_thread_info_init+0xa5/0xc0 [mdt]
[<ffffffffa1284f77>] mdt_reint+0x67/0x150 [mdt]
[<ffffffffa06df6ee>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<ffffffffa061dd87>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<ffffffffa061fb71>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<ffffffff810dbb51>] ? put_prev_entity+0x31/0x400
[<ffffffffa061ef10>] ? ptlrpc_wait_event+0x630/0x630 [ptlrpc]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
cp (28792) used greatest stack depth: 10064 bytes left
LustreError: 14144:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/6 failed: rc = -2
Lustre: 30063:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/2 is open, migrate only dentry
LustreError: 27335:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802ca55c138: inode [0x240000404:0x29:0x0] mdc close failed: rc = -2
Lustre: 27616:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802bc000040 x1829759119837824/t4294969028(0) o101->fda5abe8-779b-4106-a7dd-765d8fa018f4@0@lo:700/0 lens 376/864 e 0 to 0 dl 1744994450 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
Lustre: 31183:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802e61ff840 x1829759120066048/t4294968806(0) o101->fda5abe8-779b-4106-a7dd-765d8fa018f4@0@lo:703/0 lens 376/816 e 0 to 0 dl 1744994453 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
Lustre: 30542:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/19 is open, migrate only dentry
LustreError: 30626:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x280000403:0xf:0x0]/18 failed: rc = -2
LustreError: 30626:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 1 previous similar message
Lustre: 30089:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/4 is open, migrate only dentry
LustreError: 14144:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000404:0x47:0x0]/13 failed: rc = -2
LustreError: 3276:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cc4e0958: inode [0x280000403:0x3c:0x0] mdc close failed: rc = -13
Lustre: 1922:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/12 is open, migrate only dentry
LustreError: 1825:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0000: '15' migration was interrupted, run 'lfs migrate -m 2 -c 3 -H crush 15' to finish migration: rc = -1
LustreError: 1825:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000403:0x1:0x0]/15 failed: rc = -1
LustreError: 1825:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 1 previous similar message
LustreError: 26644:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0002: unable to read [0x280000403:0x26:0x0] ACL: rc = -2
LustreError: 5142:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802ca55c138: inode [0x280000403:0x26:0x0] mdc close failed: rc = -2
LustreError: 31421:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0001: migrate [0x280000403:0xf:0x0]/9 failed: rc = -2
LustreError: 31421:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 2 previous similar messages
Lustre: 14143:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x280000403:0x1:0x0]/2 is open, migrate only dentry
Lustre: mdt00_062: service thread pid 8264 was inactive for 72.236 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 8264, comm: mdt00_062 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_reint_setattr+0x1324/0x15f0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x84f/0x13d0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Pid: 30294, comm: mdt00_039 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock_try+0xa0/0x250 [mdt]
[<0>] mdt_object_open_lock+0x6b9/0xc10 [mdt]
[<0>] mdt_reint_open+0x2401/0x2d70 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x84f/0x13d0 [mdt]
[<0>] mdt_intent_open+0x93/0x480 [mdt]
[<0>] mdt_intent_opc.constprop.74+0x211/0xc60 [mdt]
[<0>] mdt_intent_policy+0x10f/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x397/0x980 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x547/0x18d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_004: service thread pid 27643 was inactive for 74.233 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Lustre: Skipped 1 previous similar message
Pid: 27643, comm: mdt_io00_004 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Lustre: mdt_io00_014: service thread pid 31539 was inactive for 74.139 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_rename_source_lock+0xa9/0xd6 [mdt]
[<0>] mdt_reint_migrate+0x1832/0x24b0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x84f/0x13d0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_002: service thread pid 14144 was inactive for 74.163 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 2 previous similar messages
Lustre: mdt_io00_017: service thread pid 1922 was inactive for 74.036 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 4 previous similar messages
Lustre: mdt_io00_006: service thread pid 30063 was inactive for 74.267 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 5 previous similar messages
LustreError: 13902:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff8802abf2c3c0/0x5da2367051dfa412 lrc: 3/0,0 mode: PW/PW res: [0x240000404:0xe:0x0].0x0 bits 0x4/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x5da2367051dfa3fd expref: 133 pid: 27250 timeout: 298 lvb_type: 0
Lustre: mdt00_062: service thread pid 8264 completed after 100.417s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_039: service thread pid 30294 completed after 100.416s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0001-mdc-ffff8802cc4e0958: operation mds_close to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff8802cc4e0958: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff8802cc4e0958: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 2094:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cc4e0958: inode [0x240000403:0x60:0x0] mdc close failed: rc = -108
LustreError: 10529:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000404:0xe:0x0] error -108.
Lustre: lustre-MDT0001-mdc-ffff8802cc4e0958: Connection restored to (at 0@lo)
LustreError: 13902:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802c689da40/0x5da2367051dfb522 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x1d7:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x5da2367051dfb506 expref: 226 pid: 7037 timeout: 300 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8802cc4e0958: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8802cc4e0958: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: mdt_io00_004: service thread pid 27643 completed after 103.457s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0000-mdc-ffff8802cc4e0958: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 10242:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x200000404:0x1d7:0x0] error -5.
LustreError: 10686:0:(file.c:6137:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x2:0x0] error: rc = -108
Lustre: mdt_io00_014: service thread pid 31539 completed after 103.624s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 15669:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff8802cc4e0958: namespace resource [0x200000403:0xe7:0x0].0x0 (ffff8802acc3f740) refcount nonzero (1) after lock cleanup; forcing cleanup.
LustreError: 10629:0:(mdc_request.c:1469:mdc_read_page()) lustre-MDT0000-mdc-ffff8802cc4e0958: [0x200000402:0x7:0x0] lock enqueue fails: rc = -108
Lustre: dir [0x280000404:0x3f:0x0] stripe 1 readdir failed: -108, directory is partially accessed!
Lustre: lustre-MDT0000-mdc-ffff8802cc4e0958: Connection restored to (at 0@lo)
Lustre: mdt_io00_008: service thread pid 30431 completed after 103.805s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_016: service thread pid 1888 completed after 103.615s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_003: service thread pid 27637 completed after 104.015s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_002: service thread pid 14144 completed after 104.460s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_007: service thread pid 30089 completed after 104.314s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_012: service thread pid 31162 completed after 104.778s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_010: service thread pid 30626 completed after 104.666s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_005: service thread pid 28011 completed after 104.203s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_017: service thread pid 1922 completed after 104.059s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_015: service thread pid 1825 completed after 103.731s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 10383:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802ca55c138: inode [0x280000403:0x3c:0x0] mdc close failed: rc = -13
LustreError: 10383:0:(file.c:247:ll_close_inode_openhandle()) Skipped 16 previous similar messages
Lustre: mdt_io00_011: service thread pid 30684 completed after 104.889s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_001: service thread pid 14143 completed after 106.926s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_013: service thread pid 31421 completed after 107.262s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_006: service thread pid 30063 completed after 105.557s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_018: service thread pid 13777 completed after 110.245s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 17483:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0xf5:0x0]: rc = -5
LustreError: 17483:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
4[18420]: segfault at 8 ip 00007f6c1ad1f7e8 sp 00007ffce4394bd0 error 4 in ld-2.17.so[7f6c1ad14000+22000]
Lustre: 30089:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/2 is open, migrate only dentry
Lustre: 8264:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x137:0x0] with magic=0xbd60bd0
LustreError: 13902:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802a9a987c0/0x5da2367051e1b166 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x1e0:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x5da2367051e1b104 expref: 164 pid: 27779 timeout: 419 lvb_type: 0
LustreError: 31131:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802c8e837e8 ns: mdt-lustre-MDT0000_UUID lock: ffff880297fecf00/0x5da2367051e21e95 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x1:0x0].0x0 bits 0x13/0x0 rrc: 13 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x5da2367051e21e02 expref: 22 pid: 31131 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8802ca55c138: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8802ca55c138: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8802ca55c138: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 19925:0:(file.c:6137:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x1:0x0] error: rc = -5
LustreError: 16953:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802ca55c138: inode [0x200000404:0xed:0x0] mdc close failed: rc = -108
LustreError: 18259:0:(mdc_request.c:1469:mdc_read_page()) lustre-MDT0000-mdc-ffff8802ca55c138: [0x200000400:0xb:0x0] lock enqueue fails: rc = -108
LustreError: 18259:0:(mdc_request.c:1469:mdc_read_page()) Skipped 1 previous similar message
Lustre: dir [0x200000403:0x136:0x0] stripe 0 readdir failed: -108, directory is partially accessed!
Lustre: Skipped 1 previous similar message
LustreError: 19925:0:(file.c:6137:ll_inode_revalidate_fini()) Skipped 33 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8802ca55c138: Connection restored to (at 0@lo)
LustreError: 31421:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0001: migrate [0x200000403:0xcc:0x0]/14 failed: rc = -2
Lustre: 14143:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/11 is open, migrate only dentry
LustreError: 13985:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0001: '10' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 10' to finish migration: rc = -1
LustreError: 23748:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x1bd:0x0]: rc = -5
LustreError: 23748:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 23748:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 23748:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 1 previous similar message
LustreError: 1888:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0xf:0x0]/4 failed: rc = -2
LustreError: 1888:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 2 previous similar messages
LustreError: 14144:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0002: '12' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 12' to finish migration: rc = -1
17[27430]: segfault at 8 ip 00007ff2486477e8 sp 00007ffd7c7087a0 error 4 in ld-2.17.so[7ff24863c000+22000]
LustreError: 31184:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802ca55c138: inode [0x240000404:0x118:0x0] mdc close failed: rc = -13
LustreError: 31184:0:(file.c:247:ll_close_inode_openhandle()) Skipped 11 previous similar messages
Lustre: 31162:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0001: [0x240000400:0xc:0x0]/11 is open, migrate only dentry
Lustre: 31162:0:(mdd_dir.c:4741:mdd_migrate_object()) Skipped 14 previous similar messages
LustreError: 31539:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0000: '9' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 9' to finish migration: rc = -1
LustreError: 27637:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0002: '18' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 18' to finish migration: rc = -1
Lustre: dir [0x240000404:0x18d:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: dir [0x240000405:0x5e:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
LustreError: 10187:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000405:0xed:0x0]: rc = -5
LustreError: 10187:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 30542:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0002: '11' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 11' to finish migration: rc = -1
LustreError: 30542:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) Skipped 2 previous similar messages
LustreError: 30542:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x1:0x0]/11 failed: rc = -1
LustreError: 30542:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 8 previous similar messages
LustreError: 27500:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000405:0x1ed:0x0] migrate mdt count mismatch 1 != 2
LustreError: 7942:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x3eb:0x0]: rc = -5
LustreError: 7942:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 7942:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 7942:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 1 previous similar message
LustreError: 25966:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000405:0x286:0x0]: rc = -5
LustreError: 25966:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
Lustre: dir [0x240000405:0x1fd:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 2 previous similar messages
LustreError: 31421:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0002: '1' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 1' to finish migration: rc = -1
18[21022]: segfault at 8 ip 00007f919b4927e8 sp 00007ffd85d8d950 error 4 in ld-2.17.so[7f919b487000+22000]
LustreError: 21807:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x352:0x0]: rc = -5
LustreError: 21807:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 4 previous similar messages
LustreError: 21807:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 21807:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 4 previous similar messages
Lustre: 712:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x36d:0x0] with magic=0xbd60bd0
Lustre: 712:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 952:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 10 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
Lustre: 27218:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000404:0x33a:0x0] with magic=0xbd60bd0
Lustre: 27218:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 3 previous similar messages
LustreError: 26357:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0x359:0x0]: rc = -5
LustreError: 26357:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 9 previous similar messages
LustreError: 26357:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 26357:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 9 previous similar messages
Lustre: 30089:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0002: [0x200000403:0x1:0x0]/2 is open, migrate only dentry
Lustre: 30089:0:(mdd_dir.c:4741:mdd_migrate_object()) Skipped 23 previous similar messages
Lustre: 22500:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000404:0x4e4:0x0] with magic=0xbd60bd0
Lustre: 22500:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 3 previous similar messages
LustreError: 231:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 10 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
Lustre: 20230:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x5d6:0x0] with magic=0xbd60bd0
Lustre: 20230:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 5690:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cc4e0958: inode [0x200000406:0x354:0x0] mdc close failed: rc = -13
LustreError: 5668:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0x414:0x0] : rc = -5
LustreError: 5668:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000404:0x414:0x0] error -5.
LustreError: 27639:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000405:0x391:0x0] migrate mdt count mismatch 3 != 2
LustreError: 30063:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000404:0x52e:0x0]/18 failed: rc = -2
LustreError: 30063:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 16 previous similar messages
LustreError: 31421:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0002: '15' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 15' to finish migration: rc = -1
LustreError: 31421:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) Skipped 3 previous similar messages
LustreError: 8917:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0x414:0x0] : rc = -5
Lustre: 27639:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x685:0x0] with magic=0xbd60bd0
Lustre: 27639:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 9 previous similar messages
LustreError: 19106:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000406:0x3a1:0x0]: rc = -5
LustreError: 19106:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 29 previous similar messages
LustreError: 19106:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 19106:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 29 previous similar messages
17[23536]: segfault at 8 ip 00007f55e86107e8 sp 00007ffe48ccb330 error 4 in ld-2.17.so[7f55e8605000+22000]
18[25018]: segfault at 8 ip 00007f8409f6d7e8 sp 00007ffd3c4e2b30 error 4 in ld-2.17.so[7f8409f62000+22000]
Lustre: dir [0x240000404:0x6a1:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: 542:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000405:0x53e:0x0] with magic=0xbd60bd0
Lustre: 542:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 5 previous similar messages
16[7659]: segfault at 8 ip 00007f53f00007e8 sp 00007ffee8992c30 error 4 in ld-2.17.so[7f53efff5000+22000]
LustreError: 13902:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0002_UUID lock: ffff8802f1bda5c0/0x5da2367051f0f304 lrc: 3/0,0 mode: PR/PR res: [0x280000404:0x421:0x0].0x0 bits 0x13/0x0 rrc: 12 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x5da2367051f0f2f6 expref: 355 pid: 31183 timeout: 615 lvb_type: 0
LustreError: 14125:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export ffff88009f494138 ns: mdt-lustre-MDT0002_UUID lock: ffff88009006e1c0/0x5da2367051f1254d lrc: 3/0,0 mode: PR/PR res: [0x280000404:0x421:0x0].0x0 bits 0x1b/0x0 rrc: 8 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0x5da2367051f1253f expref: 39 pid: 14125 timeout: 0 lvb_type: 0
LustreError: 19060:0:(ldlm_lockd.c:2549:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1744994853 with bad export cookie 6747015047791869097
Lustre: lustre-MDT0002-mdc-ffff8802cc4e0958: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 14125:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: lustre-MDT0002-mdc-ffff8802cc4e0958: This client was evicted by lustre-MDT0002; in progress operations using this service will fail.
LustreError: lustre-MDT0002-mdc-ffff8802cc4e0958: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 6 previous similar messages
LustreError: 13183:0:(llite_lib.c:1996:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 27449:0:(file.c:6137:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000404:0x421:0x0] error: rc = -5
LustreError: 26524:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x280000404:0x421:0x0] error -108.
LustreError: 26940:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cc4e0958: inode [0x280000404:0x421:0x0] mdc close failed: rc = -108
LustreError: 26940:0:(file.c:247:ll_close_inode_openhandle()) Skipped 3 previous similar messages
Lustre: lustre-MDT0002-mdc-ffff8802cc4e0958: Connection restored to (at 0@lo)
Lustre: lustre-MDT0002: trigger partial OI scrub for RPC inconsistency, checking FID [0x280000404:0x691:0x0]/0xa): rc = 0
LustreError: 30431:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0001: '12' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 12' to finish migration: rc = -1
LustreError: 30431:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) Skipped 4 previous similar messages
LustreError: 19954:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000405:0x52a:0x0]: rc = -5
LustreError: 19954:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 18 previous similar messages
LustreError: 19954:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 19954:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 18 previous similar messages
Lustre: 1825:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0002: [0x240000403:0x1:0x0]/6 is open, migrate only dentry
Lustre: 1825:0:(mdd_dir.c:4741:mdd_migrate_object()) Skipped 41 previous similar messages
Lustre: 31131:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000405:0x52a:0x0] with magic=0xbd60bd0
Lustre: 31131:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 30436:0:(llite_lib.c:1845:ll_update_lsm_md()) lustre: [0x240000405:0x886:0x0] dir layout mismatch:
LustreError: 30436:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=3 count=2 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
LustreError: 30436:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x240000400:0x32:0x0]
LustreError: 30436:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=1 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool=
Lustre: dir [0x240000405:0x886:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
14[2808]: segfault at 8 ip 00007fce3e23c7e8 sp 00007ffc4fec1fc0 error 4 in ld-2.17.so[7fce3e231000+22000]
16[6072]: segfault at 8 ip 00007ff408d547e8 sp 00007fff0ce5ccb0 error 4 in ld-2.17.so[7ff408d49000+22000]
LustreError: 15637:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x1:0x0]/1 failed: rc = -1
LustreError: 15637:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 24 previous similar messages
LustreError: 59:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 17 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
9[11316]: segfault at 0 ip (null) sp 00007ffda30e65a8 error 14 in 9[400000+6000]
7[15211]: segfault at 8 ip 00007fb5149f07e8 sp 00007ffe97d0d1e0 error 4 in ld-2.17.so[7fb5149e5000+22000]
7[15212]: segfault at 8 ip 00007f85275237e8 sp 00007ffebdb511b0 error 4 in ld-2.17.so[7f8527518000+22000]
7[15207]: segfault at 8 ip 00007fa4011be7e8 sp 00007ffd7251c160 error 4 in ld-2.17.so[7fa4011b3000+22000]
LustreError: 31162:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=2 index=1 hash=crush:0x82000003 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool=
LustreError: 7169:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0xb68:0x0] : rc = -5
LustreError: 7169:0:(lov_object.c:1341:lov_layout_change()) Skipped 1 previous similar message
LustreError: 7169:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000404:0xb68:0x0] error -5.
LustreError: 7169:0:(vvp_io.c:1903:vvp_io_init()) Skipped 1 previous similar message
Lustre: dir [0x280000405:0xc7:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 14 previous similar messages
LustreError: 27500:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000405:0x157:0x0] migrate mdt count mismatch 1 != 3
LustreError: 149:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 17 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 28968:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0xb68:0x0] : rc = -5
Lustre: 10563:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000404:0xb1a:0x0] with magic=0xbd60bd0
Lustre: 10563:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 35 previous similar messages
LustreError: 26274:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0xb3f:0x0]: rc = -5
LustreError: 26274:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 65 previous similar messages
LustreError: 26274:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 26274:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 66 previous similar messages
LustreError: 31162:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0001: '16' migration was interrupted, run 'lfs migrate -m 2 -c 3 -H crush 16' to finish migration: rc = -1
LustreError: 31162:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) Skipped 9 previous similar messages
LustreError: 6644:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cc4e0958: inode [0x200000406:0xbc5:0x0] mdc close failed: rc = -2
LustreError: 6644:0:(file.c:247:ll_close_inode_openhandle()) Skipped 25 previous similar messages
LustreError: 31206:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000406:0xbc1:0x0] migrate mdt count mismatch 1 != 2
LustreError: 6654:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x280000404:0xb5a:0x0] get parent: rc = -116
LustreError: 5231:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0xb68:0x0] : rc = -5
LustreError: 14474:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0xb68:0x0] : rc = -5
LustreError: 1825:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=4 index=3 hash=crush:0x82000003 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
16[26916]: segfault at 8 ip 00007fc3a266d7e8 sp 00007fffb9030080 error 4 in ld-2.17.so[7fc3a2662000+22000]
LustreError: 23669:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0xb68:0x0] : rc = -5
LustreError: 23669:0:(lov_object.c:1341:lov_layout_change()) Skipped 1 previous similar message
Lustre: dir [0x200000405:0xbc4:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
17[29127]: segfault at 8 ip 00007f1bf34d77e8 sp 00007fff825f6010 error 4 in ld-2.17.so[7f1bf34cc000+22000]
LustreError: 7134:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0002: [0x280000404:0xe69:0x0] migrate mdt count mismatch 3 != 1
LustreError: 7134:0:(mdt_xattr.c:402:mdt_dir_layout_update()) Skipped 1 previous similar message
LustreError: 27671:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0001: [0x240000405:0xb44:0x0] migrate mdt count mismatch 2 != 3
LustreError: 7006:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802cc4e0958: cannot apply new layout on [0x240000404:0xb68:0x0] : rc = -5
Lustre: 26973:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000405:0xec1:0x0] with magic=0xbd60bd0
Lustre: 26973:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 11 previous similar messages
LustreError: 7558:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000405:0xbab:0x0] error -5.
LustreError: 14592:0:(llite_lib.c:1845:ll_update_lsm_md()) lustre: [0x280000405:0x6bb:0x0] dir layout mismatch:
LustreError: 14592:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=3 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
LustreError: 14592:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x280000400:0x41:0x0]
LustreError: 14592:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 4 previous similar messages
LustreError: 14592:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
LustreError: 127:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 14 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
6[24584]: segfault at 8 ip 00007f209042a7e8 sp 00007ffc86de9020 error 4 in ld-2.17.so[7f209041f000+22000]
LustreError: 17078:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000404:0x120f:0x0] error -5.
Lustre: 30089:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0002: [0x280000402:0x40:0x0]/18 is open, migrate only dentry
Lustre: 30089:0:(mdd_dir.c:4741:mdd_migrate_object()) Skipped 54 previous similar messages
LustreError: 27812:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000404:0x12d3:0x0]: rc = -2
LustreError: 14129:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000404:0x12d3:0x0]: rc = -2
LustreError: 4868:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802ca55c138: cannot apply new layout on [0x240000405:0xbab:0x0] : rc = -5
LustreError: 4868:0:(lov_object.c:1341:lov_layout_change()) Skipped 7 previous similar messages
LustreError: 31195:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000405:0xe86:0x0] migrate mdt count mismatch 1 != 3
LustreError: 30542:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000403:0x1:0x0]/17 failed: rc = -1
LustreError: 30542:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 52 previous similar messages
LustreError: 3961:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x200000406:0x130b:0x0] error -5.
9[16664]: segfault at 8 ip 00007f3cd2bf97e8 sp 00007ffeb7f22ca0 error 4 in ld-2.17.so[7f3cd2bee000+22000]
8[23195]: segfault at 8 ip 00007ff048b797e8 sp 00007ffe2f1ca500 error 4 in ld-2.17.so[7ff048b6e000+22000]
LustreError: 152:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 4 [0x200000406:0x136e:0x0] inode@0000000000000000: rc = -5
LustreError: 152:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 2 previous similar messages
LustreError: 4904:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0000: [0x200000405:0x113e:0x0] migrate mdt count mismatch 2 != 1
LustreError: 4904:0:(mdt_xattr.c:402:mdt_dir_layout_update()) Skipped 2 previous similar messages
Lustre: lustre-MDT0000: trigger partial OI scrub for RPC inconsistency, checking FID [0x200000405:0x115e:0x0]/0xa): rc = 0
LustreError: 9497:0:(osd_index.c:204:__osd_xattr_load_by_oid()) lustre-MDT0000: can't get bonus, rc = -2
LustreError: 10496:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000405:0xc4a:0x0]: rc = -5
LustreError: 10496:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 136 previous similar messages
LustreError: 10496:0:(llite_lib.c:3697:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 10496:0:(llite_lib.c:3697:ll_prep_inode()) Skipped 137 previous similar messages
LustreError: 14144:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) lustre-MDD0002: '5' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 5' to finish migration: rc = -1
LustreError: 14144:0:(mdd_dir.c:4662:mdd_migrate_cmd_check()) Skipped 28 previous similar messages
Lustre: dir [0x240000404:0x175f:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 25 previous similar messages
LustreError: 12999:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cc4e0958: inode [0x200000405:0x10d5:0x0] mdc close failed: rc = -2
LustreError: 12999:0:(file.c:247:ll_close_inode_openhandle()) Skipped 7 previous similar messages
LustreError: 31156:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000406:0x1740:0x0] ACL: rc = -2
hrtimer: interrupt took 6818462 ns
LustreError: 28348:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802ca55c138: cannot apply new layout on [0x240000405:0x1639:0x0] : rc = -5
LustreError: 28348:0:(lov_object.c:1341:lov_layout_change()) Skipped 1 previous similar message
LustreError: 28348:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000405:0x1639:0x0] error -5.
Lustre: lustre-OST0000: already connected client lustre-MDT0002-mdtlov_UUID (at 0@lo) with handle 0x5da2367051d9788f. Rejecting client with the same UUID trying to reconnect with handle 0x720ebeb18f7fe12d
Lustre: lustre-OST0000: Client lustre-MDT0000-mdtlov_UUID (at 192.168.123.101@tcp) refused connection, still busy with 16 references
Lustre: Skipped 2 previous similar messages
Lustre: lustre-MDT0001: trigger partial OI scrub for RPC inconsistency, checking FID [0x240000405:0x159d:0x0]/0xa): rc = 0
2[17752]: segfault at 8 ip 00007f7e27ac57e8 sp 00007ffd00613160 error 4 in ld-2.17.so[7f7e27aba000+22000]
2[17863]: segfault at 8 ip 00007f2c291e17e8 sp 00007ffdc834dd80 error 4 in ld-2.17.so[7f2c291d6000+22000]
Lustre: 26885:0:(lod_lov.c:1417:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000405:0xc96:0x0] with magic=0xbd60bd0
Lustre: 26885:0:(lod_lov.c:1417:lod_parse_striping()) Skipped 61 previous similar messages
LustreError: 618:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 20 [0x240000405:0x16a3:0x0] inode@0000000000000000: rc = -5
LustreError: 618:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
LustreError: 27239:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 27566:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 27500:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 26808:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 26808:0:(mdt_open.c:1703:mdt_reint_open()) Skipped 1 previous similar message
LustreError: 27369:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 27369:0:(mdt_open.c:1703:mdt_reint_open()) Skipped 2 previous similar messages
LustreError: 27616:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0001: [0x240000405:0x16fe:0x0] migrate mdt count mismatch 3 != 1
LustreError: 27616:0:(mdt_xattr.c:402:mdt_dir_layout_update()) Skipped 1 previous similar message
LustreError: 27508:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 27508:0:(mdt_open.c:1703:mdt_reint_open()) Skipped 4 previous similar messages
ptlrpc_watchdog_fire: 16 callbacks suppressed
Lustre: mdt00_011: service thread pid 26976 was inactive for 40.006 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 26976, comm: mdt00_011 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xf6/0x2cc0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.74+0x211/0xc60 [mdt]
[<0>] mdt_intent_policy+0x10f/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x397/0x980 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x547/0x18d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Pid: 23219, comm: mdt00_075 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xf6/0x2cc0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.74+0x211/0xc60 [mdt]
[<0>] mdt_intent_policy+0x10f/0x460 [mdt]
[<0>] ldlm_lock_enqueue+0x397/0x980 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x547/0x18d0 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_017: service thread pid 1922 was inactive for 74.277 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Lustre: Skipped 1 previous similar message
Pid: 1922, comm: mdt_io00_017 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x943/0xd80 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x2ea/0x810 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_rename_source_lock+0xa9/0xd6 [mdt]
[<0>] mdt_reint_migrate+0x1832/0x24b0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x84f/0x13d0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a60 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x257/0xcd0 [ptlrpc]
[<0>] ptlrpc_main+0xc61/0x1640 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_006: service thread pid 30063 was inactive for 74.245 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 1 previous similar message
Lustre: mdt_io00_014: service thread pid 31539 was inactive for 74.201 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: mdt_io00_003: service thread pid 27637 was inactive for 74.045 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: mdt_io00_002: service thread pid 14144 was inactive for 74.195 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 5 previous similar messages
Lustre: mdt_io00_015: service thread pid 1825 was inactive for 74.181 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 4 previous similar messages
Lustre: mdt_io00_016: service thread pid 1888 was inactive for 74.207 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 1 previous similar message
LustreError: 13902:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0002_UUID lock: ffff88025765e1c0/0x5da236705252c3e7 lrc: 3/0,0 mode: PR/PR res: [0x280000404:0x11e4:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x5da236705252c39a expref: 647 pid: 26964 timeout: 1295 lvb_type: 0
Lustre: mdt_io00_017: service thread pid 1922 completed after 107.072s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_011: service thread pid 26976 completed after 100.196s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_006: service thread pid 30063 completed after 106.790s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0002-mdc-ffff8802ca55c138: operation mds_getattr_lock to node 0@lo failed: rc = -107
LustreError: Skipped 1 previous similar message
Lustre: lustre-MDT0002-mdc-ffff8802ca55c138: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: mdt00_075: service thread pid 23219 completed after 100.191s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0002-mdc-ffff8802ca55c138: This client was evicted by lustre-MDT0002; in progress operations using this service will fail.
Lustre: mdt_io00_014: service thread pid 31539 completed after 106.266s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 8547:0:(llite_lib.c:1996:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 8547:0:(llite_lib.c:1996:ll_md_setattr()) Skipped 4 previous similar messages
Lustre: lustre-MDT0002-mdc-ffff8802ca55c138: Connection restored to (at 0@lo)
LustreError: 31198:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
Lustre: mdt_io00_003: service thread pid 27637 completed after 105.362s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_007: service thread pid 30089 completed after 105.174s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_010: service thread pid 30626 completed after 104.949s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_001: service thread pid 14143 completed after 103.875s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_018: service thread pid 13777 completed after 104.023s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_011: service thread pid 30684 completed after 103.776s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_002: service thread pid 14144 completed after 103.761s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_020: service thread pid 15637 completed after 103.585s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_013: service thread pid 31421 completed after 102.067s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_009: service thread pid 30542 completed after 101.549s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_019: service thread pid 13985 completed after 100.498s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_015: service thread pid 1825 completed after 99.945s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_008: service thread pid 30431 completed after 98.171s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_016: service thread pid 1888 completed after 89.040s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
8[14590]: segfault at 0 ip (null) sp 00007ffece8032c8 error 14 in 8[400000+6000]
LustreError: 30542:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=2 index=1 hash=crush:0x82000003 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool=
LustreError: 6291:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x200000405:0x18c7:0x0] error -5.
Lustre: dir [0x200000405:0x18e2:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 4 previous similar messages
LustreError: 26644:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 26644:0:(mdt_open.c:1703:mdt_reint_open()) Skipped 14 previous similar messages
LustreError: 7037:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000406:0x1e85:0x0] ACL: rc = -2
LustreError: 510:0:(mdt_xattr.c:402:mdt_dir_layout_update()) lustre-MDT0001: [0x240000404:0x1ba4:0x0] migrate mdt count mismatch 1 != 3
LustreError: 510:0:(mdt_xattr.c:402:mdt_dir_layout_update()) Skipped 1 previous similar message
17[18263]: segfault at 0 ip (null) sp 00007fffb1857c08 error 14 in 17[400000+6000]
LustreError: 26107:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8802ca55c138: cannot apply new layout on [0x200000405:0x18c7:0x0] : rc = -5
LustreError: 26107:0:(lov_object.c:1341:lov_layout_change()) Skipped 4 previous similar messages
LustreError: 27398:0:(mdt_open.c:1703:mdt_reint_open()) lustre-MDT0000: name '17' present, but FID [0x200000406:0x18fd:0x0] is invalid
LustreError: 27398:0:(mdt_open.c:1703:mdt_reint_open()) Skipped 22 previous similar messages
LustreError: 151:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 13 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 151:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
Lustre: 1922:0:(mdd_dir.c:4741:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/19 is open, migrate only dentry
Lustre: 1922:0:(mdd_dir.c:4741:mdd_migrate_object()) Skipped 78 previous similar messages
14[15878]: segfault at 8 ip 00007fb78338b7e8 sp 00007ffd16082630 error 4 in ld-2.17.so[7fb783380000+22000]
LustreError: 27643:0:(mdt_reint.c:2523:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000405:0x18e2:0x0]/6 failed: rc = -2
LustreError: 27643:0:(mdt_reint.c:2523:mdt_reint_migrate()) Skipped 82 previous similar messages
Link to test
racer test 1: racer on clients: centos-105.localnet DURATION=2700
LustreError: 7516:0:(mdd_dir.c:231:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 7516:0:(mdd_dir.c:231:mdd_parent_fid()) LBUG
CPU: 2 PID: 7516 Comm: mdt_io01_006 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa016ea9d>] lbug_with_loc+0x4d/0xb0 [libcfs]
[<ffffffffa1217195>] mdd_parent_fid+0x395/0x3d0 [mdd]
[<ffffffffa12175b0>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa121788c>] mdd_is_subdir+0x20c/0x250 [mdd]
[<ffffffffa12b0020>] mdt_reint_rename+0x1020/0x2c20 [mdt]
[<ffffffffa0402ece>] ? lu_ucred+0x1e/0x30 [obdclass]
[<ffffffffa12a4995>] ? mdt_ucred+0x15/0x20 [mdt]
[<ffffffffa12bb1d7>] mdt_reint_rec+0x87/0x240 [mdt]
[<ffffffffa128f89c>] mdt_reint_internal+0x74c/0xbc0 [mdt]
[<ffffffffa1297615>] ? mdt_thread_info_init+0xa5/0xc0 [mdt]
[<ffffffffa129a337>] mdt_reint+0x67/0x150 [mdt]
[<ffffffffa06efa4e>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<ffffffffa0632a63>] ptlrpc_server_handle_request+0x273/0xcc0 [ptlrpc]
[<ffffffffa063483e>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<ffffffff810dbb51>] ? put_prev_entity+0x31/0x400
[<ffffffffa0633bc0>] ? ptlrpc_wait_event+0x630/0x630 [ptlrpc]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
LustreError: 25990:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0001: unable to read [0x240000403:0xe:0x0] ACL: rc = -2
LustreError: 28747:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0001: unable to read [0x240000404:0x9:0x0] ACL: rc = -2
LustreError: 18590:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000403:0x1:0x0]/10 failed: rc = -114
LustreError: 18575:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000403:0x1:0x0]/14 failed: rc = -114
LustreError: 5523:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x240000403:0x1d:0x0] mdc close failed: rc = -116
LustreError: 18590:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/12 failed: rc = -2
LustreError: 5696:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/8 failed: rc = -114
LustreError: 4307:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x280000404:0xa:0x0] mdc close failed: rc = -116
LustreError: 6474:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x200000404:0x93:0x0] mdc close failed: rc = -116
LustreError: 6474:0:(file.c:262:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 6688:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '17' migration was interrupted, run 'lfs migrate -m 2 -c 3 -H crush 17' to finish migration: rc = -1
LustreError: 6688:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x2:0x0]/17 failed: rc = -1
LustreError: 6688:0:(mdt_reint.c:2533:mdt_reint_migrate()) Skipped 7 previous similar messages
Lustre: 28611:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88028ff82d40 x1806672389173632/t4294969214(0) o101->8c2bc090-fc6a-407a-a331-f5ba4a212e21@0@lo:1/0 lens 376/816 e 0 to 0 dl 1722977196 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
LustreError: 28224:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0002: unable to read [0x280000404:0xb9:0x0] ACL: rc = -2
9[7738]: segfault at 8 ip 00007f4f0db787e8 sp 00007ffc17f5cb40 error 4 in ld-2.17.so[7f4f0db6d000+22000]
LustreError: 7802:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x280000403:0x12:0x0] mdc close failed: rc = -116
LustreError: 7802:0:(file.c:262:ll_close_inode_openhandle()) Skipped 1 previous similar message
Lustre: 18526:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x92:0x0] with magic=0xbd60bd0
LustreError: 8243:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000403:0x93:0x0]/19 failed: rc = -16
LustreError: 8243:0:(mdt_reint.c:2533:mdt_reint_migrate()) Skipped 8 previous similar messages
LustreError: 18578:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '14' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 14' to finish migration: rc = -1
Lustre: 18526:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880237411940 x1806672390938368/t4294970251(0) o101->8c2bc090-fc6a-407a-a331-f5ba4a212e21@0@lo:13/0 lens 376/840 e 0 to 0 dl 1722977208 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
Lustre: 5561:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff880237410040 x1806672391291904/t4294971573(0) o101->02847ad6-cd2d-4f3b-9a48-5c4e4eeba476@0@lo:15/0 lens 376/816 e 0 to 0 dl 1722977210 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
Lustre: 18523:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x13a:0x0] with magic=0xbd60bd0
Lustre: 18523:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 6688:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '16' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 16' to finish migration: rc = -1
LustreError: 15907:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff88009f0c2e98: inode [0x240000404:0x86:0x0] mdc close failed: rc = -116
LustreError: 18583:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000404:0x168:0x0]/18 failed: rc = -114
LustreError: 18583:0:(mdt_reint.c:2533:mdt_reint_migrate()) Skipped 26 previous similar messages
LustreError: 18959:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x280000404:0x183:0x0] mdc close failed: rc = -116
LustreError: 18959:0:(file.c:262:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 5696:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '18' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 18' to finish migration: rc = -1
Lustre: lustre-MDT0000: trigger partial OI scrub for RPC inconsistency, checking FID [0x200000404:0x168:0x0]/0xa): rc = 0
LustreError: 27804:0:(osd_index.c:201:__osd_xattr_load_by_oid()) lustre-MDT0000: can't get bonus, rc = -2
LustreError: 8139:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000403:0x1:0x0]/3 failed: rc = -16
LustreError: 8139:0:(mdt_reint.c:2533:mdt_reint_migrate()) Skipped 22 previous similar messages
LustreError: 6688:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0001: '7' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 7' to finish migration: rc = -1
LustreError: 3656:0:(mdt_xattr.c:415:mdt_dir_layout_update()) lustre-MDT0000: [0x200000404:0x168:0x0] migrate mdt count mismatch 1 != 2
Lustre: 25990:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x1de:0x0] with magic=0xbd60bd0
Lustre: 25990:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 25503:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x303:0x0]: rc = -5
LustreError: 25503:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
Lustre: 3279:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x319:0x0] with magic=0xbd60bd0
Lustre: 3279:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 17 previous similar messages
LustreError: 25503:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x1eb:0x0]: rc = -5
LustreError: 25503:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
Lustre: 11713:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x227:0x0] with magic=0xbd60bd0
Lustre: 11713:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 3345:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0x206:0x0]: rc = -5
LustreError: 3345:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
Lustre: 11727:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x326:0x0] with magic=0xbd60bd0
Lustre: 11727:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 33 previous similar messages
LustreError: 6452:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '4' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 4' to finish migration: rc = -1
Lustre: dir [0x240000403:0x28e:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
LustreError: 11075:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x24a:0x0]: rc = -5
LustreError: 11075:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 12953:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x481:0x0]: rc = -5
LustreError: 12953:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 6 previous similar messages
LustreError: 12953:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 12953:0:(llite_lib.c:3739:ll_prep_inode()) Skipped 6 previous similar messages
Lustre: dir [0x240000403:0x468:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: 5432:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x24b:0x0] with magic=0xbd60bd0
Lustre: 5432:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 14890:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff88009f0c2e98: inode [0x280000403:0x33b:0x0] mdc close failed: rc = -2
LustreError: 14890:0:(file.c:262:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 18823:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/9 failed: rc = -114
LustreError: 18823:0:(mdt_reint.c:2533:mdt_reint_migrate()) Skipped 54 previous similar messages
LustreError: 22442:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x1eb:0x0]: rc = -5
LustreError: 22442:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 10 previous similar messages
LustreError: 22442:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 22442:0:(llite_lib.c:3739:ll_prep_inode()) Skipped 10 previous similar messages
Lustre: dir [0x240000403:0x491:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
LustreError: 19943:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '4' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 4' to finish migration: rc = -1
LustreError: 19943:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 4 previous similar messages
8[28005]: segfault at 8 ip 00007fef1cb4f7e8 sp 00007ffd5d2ca820 error 4 in ld-2.17.so[7fef1cb44000+22000]
7[29113]: segfault at 8 ip 00007fc1ba11f7e8 sp 00007ffeeaddf940 error 4 in ld-2.17.so[7fc1ba114000+22000]
LustreError: 5447:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x464:0x0]: rc = -5
LustreError: 5447:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 21 previous similar messages
LustreError: 5447:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 5447:0:(llite_lib.c:3739:ll_prep_inode()) Skipped 21 previous similar messages
Lustre: dir [0x280000403:0x370:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
0[7592]: segfault at 8 ip 00007fbaf6c667e8 sp 00007ffdcc998110 error 4 in ld-2.17.so[7fbaf6c5b000+22000]
LustreError: 8533:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x240000403:0x5c2:0x0] mdc close failed: rc = -116
LustreError: 10698:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0001: '11' migration was interrupted, run 'lfs migrate -m 2 -c 2 -H crush 11' to finish migration: rc = -1
LustreError: 10698:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 2 previous similar messages
LustreError: 16365:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x77e:0x0]: rc = -5
LustreError: 16365:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 7 previous similar messages
LustreError: 16365:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 16365:0:(llite_lib.c:3739:ll_prep_inode()) Skipped 7 previous similar messages
Lustre: 3184:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x572:0x0] with magic=0xbd60bd0
Lustre: 3184:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 5 previous similar messages
LustreError: 80:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 19 [0x280000403:0x44a:0x0]: rc = -5
cat (23692) used greatest stack depth: 9824 bytes left
Lustre: mdt05_008: service thread pid 3183 was inactive for 40.118 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 3183, comm: mdt05_008 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x913/0xd50 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x890 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_check_lock+0xec/0x3c0 [mdt]
[<0>] mdt_object_stripes_lock+0xba/0x660 [mdt]
[<0>] mdt_reint_unlink+0x7a2/0x15b0 [mdt]
[<0>] mdt_reint_rec+0x87/0x240 [mdt]
[<0>] mdt_reint_internal+0x74c/0xbc0 [mdt]
[<0>] mdt_reint+0x67/0x150 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x273/0xcc0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt01_005: service thread pid 28334 was inactive for 40.165 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 28334, comm: mdt01_005 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x913/0xd50 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x890 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xbf3/0x2bd0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.75+0x211/0xc50 [mdt]
[<0>] mdt_intent_policy+0x10d/0x470 [mdt]
[<0>] ldlm_lock_enqueue+0x34f/0x930 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x507/0x1850 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x273/0xcc0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 28128:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff880075cd2e98: inode [0x200000404:0x696:0x0] mdc close failed: rc = -2
LustreError: 28128:0:(file.c:262:ll_close_inode_openhandle()) Skipped 3 previous similar messages
LustreError: 18575:0:(mdt_reint.c:2533:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000404:0x68f:0x0]/14 failed: rc = -2
LustreError: 18575:0:(mdt_reint.c:2533:mdt_reint_migrate()) Skipped 80 previous similar messages
LustreError: 12762:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 19 [0x240000404:0x6d9:0x0]: rc = -5
Lustre: mdt00_002: service thread pid 18513 was inactive for 72.048 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 18513, comm: mdt00_002 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x913/0xd50 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x890 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1b3/0x470 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xbf3/0x2bd0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.75+0x211/0xc50 [mdt]
[<0>] mdt_intent_policy+0x10d/0x470 [mdt]
[<0>] ldlm_lock_enqueue+0x34f/0x930 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x507/0x1850 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x273/0xcc0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: dir [0x200000404:0x7bb:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 3 previous similar messages
14[6591]: segfault at 0 ip (null) sp 00007ffd38007918 error 14 in 14[400000+6000]
LustreError: 18578:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '14' migration was interrupted, run 'lfs migrate -m 2 -c 2 -H crush 14' to finish migration: rc = -1
LustreError: 18578:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 4 previous similar messages
LustreError: 10009:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x555:0x0]: rc = -5
LustreError: 10009:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 25 previous similar messages
LustreError: 10009:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 10009:0:(llite_lib.c:3739:ll_prep_inode()) Skipped 25 previous similar messages
LustreError: 18437:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff880324245a40/0x3a98ba4dbe165fce lrc: 3/0,0 mode: CR/CR res: [0x200000403:0x776:0x0].0x0 bits 0xa/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x3a98ba4dbe165cb7 expref: 469 pid: 18531 timeout: 18132 lvb_type: 0
LustreError: 30772:0:(ldlm_lockd.c:2589:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1722977478 with bad export cookie 4222329493722503549
Lustre: lustre-MDT0000-mdc-ffff88009f0c2e98: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff88009f0c2e98: operation mds_close to node 0@lo failed: rc = -107
LustreError: 18513:0:(ldlm_lockd.c:1498:ldlm_handle_enqueue()) ### lock on destroyed export ffff880151f22e98 ns: mdt-lustre-MDT0000_UUID lock: ffff88013d739300/0x3a98ba4dbe169145 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x5cf:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x3a98ba4dbe169129 expref: 59 pid: 18513 timeout: 0 lvb_type: 0
LustreError: Skipped 1 previous similar message
Lustre: mdt00_002: service thread pid 18513 completed after 98.253s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt05_008: service thread pid 3183 completed after 100.878s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt01_005: service thread pid 28334 completed after 83.613s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0000-mdc-ffff88009f0c2e98: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 12974:0:(llite_lib.c:2026:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 11468:0:(file.c:5695:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000404:0x77a:0x0] error: rc = -5
LustreError: 11468:0:(file.c:5695:ll_inode_revalidate_fini()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff88009f0c2e98: Connection restored to (at 0@lo)
LustreError: 18523:0:(ldlm_lockd.c:1498:ldlm_handle_enqueue()) ### lock on destroyed export ffff880151f22e98 ns: mdt-lustre-MDT0000_UUID lock: ffff880085264780/0x3a98ba4dbe20691b lrc: 3/0,0 mode: CR/CR res: [0x200000404:0x8f5:0x0].0x0 bits 0xa/0x0 rrc: 2 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x3a98ba4dbe20690d expref: 9 pid: 18523 timeout: 0 lvb_type: 0
LustreError: 11752:0:(ldlm_lockd.c:1498:ldlm_handle_enqueue()) ### lock on destroyed export ffff880151f22e98 ns: mdt-lustre-MDT0000_UUID lock: ffff880267cf7840/0x3a98ba4dbe205fe4 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x8a7:0x0].0x0 bits 0x12/0x0 rrc: 7 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x3a98ba4dbe205fd6 expref: 2 pid: 11752 timeout: 0 lvb_type: 0
LustreError: 11752:0:(ldlm_lockd.c:1498:ldlm_handle_enqueue()) Skipped 2 previous similar messages
Lustre: 29098:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000404:0x666:0x0] with magic=0xbd60bd0
Lustre: 29098:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 41 previous similar messages
11[12232]: segfault at 8 ip 00007fe2fa2417e8 sp 00007ffcbaf4c520 error 4 in ld-2.17.so[7fe2fa236000+22000]
7[16957]: segfault at 8 ip 00007f89145187e8 sp 00007ffc44bc7f30 error 4 in ld-2.17.so[7f891450d000+22000]
Lustre: dir [0x280000404:0x4f8:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
Lustre: dir [0x240000404:0x7b9:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 4 previous similar messages
18[1840]: segfault at 8 ip 00007f25e193a7e8 sp 00007ffccee47ef0 error 4 in ld-2.17.so[7f25e192f000+22000]
LustreError: 20848:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '16' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 16' to finish migration: rc = -1
LustreError: 20848:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 10 previous similar messages
LustreError: 2349:0:(file.c:262:ll_close_inode_openhandle()) lustre-clilmv-ffff88009f0c2e98: inode [0x200000404:0xa60:0x0] mdc close failed: rc = -116
LustreError: 2349:0:(file.c:262:ll_close_inode_openhandle()) Skipped 37 previous similar messages
LustreError: 3891:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x77e:0x0]: rc = -5
LustreError: 3891:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 6 previous similar messages
LustreError: 3891:0:(llite_lib.c:3739:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 3891:0:(llite_lib.c:3739:ll_prep_inode()) Skipped 6 previous similar messages
Lustre: dir [0x240000404:0x99f:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 2 previous similar messages
5[20864]: segfault at 8 ip 00007f8a416b87e8 sp 00007ffff9bea200 error 4 in ld-2.17.so[7f8a416ad000+22000]
Lustre: 5162:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0xabb:0x0] with magic=0xbd60bd0
Lustre: 5162:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 5 previous similar messages
LustreError: 23321:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0001: unable to read [0x240000404:0x90b:0x0] ACL: rc = -2
Link to test
racer test 1: racer on clients: centos-110.localnet DURATION=2700
LustreError: 11400:0:(mdd_dir.c:231:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 11400:0:(mdd_dir.c:231:mdd_parent_fid()) LBUG
CPU: 5 PID: 11400 Comm: mdt_io02_001 Kdump: loaded Tainted: P OE ------------ 3.10.0-7.9-debug #2
Hardware name: Red Hat KVM, BIOS 1.16.0-3.module_el8.7.0+1218+f626c2ff 04/01/2014
Call Trace:
[<ffffffff817d93f8>] dump_stack+0x19/0x1b
[<ffffffffa0191a9d>] lbug_with_loc+0x4d/0xa0 [libcfs]
[<ffffffffa1217917>] mdd_parent_fid+0x3d7/0x3e0 [mdd]
[<ffffffffa1217d00>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa1217fdc>] mdd_is_subdir+0x20c/0x250 [mdd]
[<ffffffffa12afb21>] mdt_reint_rename+0x1001/0x2af0 [mdt]
[<ffffffffa040d42e>] ? lu_ucred+0x1e/0x30 [obdclass]
[<ffffffffa12a4495>] ? mdt_ucred+0x15/0x20 [mdt]
[<ffffffffa12badf7>] mdt_reint_rec+0x87/0x240 [mdt]
[<ffffffffa128f84c>] mdt_reint_internal+0x74c/0xbc0 [mdt]
[<ffffffffa1297515>] ? mdt_thread_info_init+0xa5/0xc0 [mdt]
[<ffffffffa129a237>] mdt_reint+0x67/0x150 [mdt]
[<ffffffffa06f96ae>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<ffffffffa063cb5c>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[<ffffffffa063e92e>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<ffffffff810dbb51>] ? put_prev_entity+0x31/0x400
[<ffffffffa063dcb0>] ? ptlrpc_wait_event+0x630/0x630 [ptlrpc]
[<ffffffff810ba114>] kthread+0xe4/0xf0
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
[<ffffffff817ede5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffff810ba030>] ? kthread_create_on_node+0x140/0x140
LustreError: 11401:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/19 failed: rc = -16
LustreError: 32434:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff880318064138: inode [0x280000403:0x12b:0x0] mdc close failed: rc = -116
LustreError: 11395:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/7 failed: rc = -114
LustreError: 535:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff880318064138: inode [0x200000403:0x120:0x0] mdc close failed: rc = -116
LustreError: 29725:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/15 failed: rc = -114
LustreError: 29725:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 4 previous similar messages
LustreError: 1271:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff880318064138: inode [0x200000403:0x144:0x0] mdc close failed: rc = -116
LustreError: 1271:0:(file.c:264:ll_close_inode_openhandle()) Skipped 3 previous similar messages
LustreError: 11416:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/16 failed: rc = -16
LustreError: 11416:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 7 previous similar messages
LustreError: 2835:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff880318064138: inode [0x280000404:0x14e:0x0] mdc close failed: rc = -116
LustreError: 2835:0:(file.c:264:ll_close_inode_openhandle()) Skipped 6 previous similar messages
10[2503]: segfault at 8 ip 00007f834b2837e8 sp 00007ffc73cf4840 error 4 in ld-2.17.so[7f834b278000+22000]
LustreError: 11406:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/10 failed: rc = -114
LustreError: 11406:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 7 previous similar messages
LustreError: 5485:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff880318064138: inode [0x200000404:0x1a9:0x0] mdc close failed: rc = -116
LustreError: 5485:0:(file.c:264:ll_close_inode_openhandle()) Skipped 3 previous similar messages
Lustre: 29629:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff8802645ed040 x1798710016803264/t4294970938(0) o101->22d52c1f-db61-48f8-b5df-37e27035433f@0@lo:282/0 lens 376/840 e 0 to 0 dl 1715383687 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
Lustre: 11339:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x108:0x0] with magic=0xbd60bd0
LustreError: 32572:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/4 failed: rc = -16
LustreError: 32572:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 5 previous similar messages
LustreError: 11400:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000404:0x1de:0x0]/16 failed: rc = -114
LustreError: 11400:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 4 previous similar messages
LustreError: 9972:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb31b7e8: inode [0x280000404:0x1bb:0x0] mdc close failed: rc = -116
LustreError: 9972:0:(file.c:264:ll_close_inode_openhandle()) Skipped 3 previous similar messages
17[13542]: segfault at 0 ip (null) sp 00007ffe5c8973b8 error 14 in 17[400000+6000]
Lustre: 21631:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88028eb73240 x1798710018387840/t4294971831(0) o101->22d52c1f-db61-48f8-b5df-37e27035433f@0@lo:314/0 lens 376/864 e 0 to 0 dl 1715383719 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
LustreError: 18117:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x201:0x0]: rc = -5
LustreError: 18117:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 21825:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x201:0x0]: rc = -5
LustreError: 21825:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 31142:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '6' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 6' to finish migration: rc = -1
LustreError: 31142:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x280000403:0x1:0x0]/6 failed: rc = -1
LustreError: 31142:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 36 previous similar messages
Lustre: dir [0x280000404:0x1fb:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
LustreError: 25684:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x201:0x0]: rc = -5
LustreError: 25684:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 25684:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 19 [0x280000404:0x201:0x0]: rc = -5
Lustre: 29011:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x3a9:0x0] with magic=0xbd60bd0
Lustre: 29011:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 30775:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0001: '5' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 5' to finish migration: rc = -1
11[1921]: segfault at 8 ip 00007f16e743d7e8 sp 00007fff3b8fe490 error 4 in ld-2.17.so[7f16e7432000+22000]
LustreError: 31338:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x4d7:0x0]: rc = -5
LustreError: 31338:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 3 previous similar messages
LustreError: 31338:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 31338:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 3 previous similar messages
Lustre: 24556:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x455:0x0] with magic=0xbd60bd0
Lustre: 24556:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 526:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x4b4:0x0] with magic=0xbd60bd0
Lustre: 526:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 7 previous similar messages
Lustre: dir [0x200000404:0x4b3:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 3 previous similar messages
LustreError: 12662:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x4d7:0x0]: rc = -5
LustreError: 12662:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 2 previous similar messages
LustreError: 12662:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 12662:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 2 previous similar messages
Lustre: 7370:0:(mdt_recovery.c:148:mdt_req_from_lrd()) @@@ restoring transno req@ffff88006f447840 x1798710023447424/t4294974709(0) o101->280e1d66-85db-412e-87b4-757828705bf3@0@lo:393/0 lens 376/840 e 0 to 0 dl 1715383798 ref 1 fl Interpret:H/202/0 rc 0/0 job:'dd.0' uid:0 gid:0
LustreError: 4449:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/2 failed: rc = -16
LustreError: 4449:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 35 previous similar messages
LustreError: 15105:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x4d7:0x0]: rc = -5
LustreError: 15105:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 15105:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 15105:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 1 previous similar message
LustreError: 20064:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb31b7e8: inode [0x280000404:0x1f4:0x0] mdc close failed: rc = -116
LustreError: 20064:0:(file.c:264:ll_close_inode_openhandle()) Skipped 2 previous similar messages
Lustre: 11344:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x3ed:0x0] with magic=0xbd60bd0
Lustre: 11344:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 21810:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x40a:0x0] with magic=0xbd60bd0
Lustre: 21810:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 7 previous similar messages
LustreError: 26095:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x3a1:0x0]: rc = -5
LustreError: 26095:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 6 previous similar messages
LustreError: 26095:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 26095:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 6 previous similar messages
LustreError: 1221:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb31b7e8: inode [0x200000404:0x6d6:0x0] mdc close failed: rc = -116
LustreError: 1221:0:(file.c:264:ll_close_inode_openhandle()) Skipped 5 previous similar messages
LustreError: 21606:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0001: '1' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 1' to finish migration: rc = -1
LustreError: 11400:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '1' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 1' to finish migration: rc = -1
LustreError: 23934:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 11 [0x200000403:0x547:0x0]: rc = -5
LustreError: 5898:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x547:0x0]: rc = -5
LustreError: 5898:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 7 previous similar messages
LustreError: 5898:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 5898:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 7 previous similar messages
Lustre: dir [0x200000404:0x643:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 2 previous similar messages
LustreError: 10983:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '18' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 18' to finish migration: rc = -1
LustreError: 10983:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 1 previous similar message
LustreError: 21606:0:(lustre_lmv.h:517:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=2 index=1 hash=crush:0x82000003 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool=
Lustre: dir [0x200000403:0x6f3:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: 20322:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x556:0x0] with magic=0xbd60bd0
Lustre: 20322:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 23 previous similar messages
LustreError: 11398:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '14' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 14' to finish migration: rc = -1
LustreError: 7714:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x280000403:0x5c0:0x0] : rc = -5
LustreError: 7714:0:(vvp_io.c:1920:vvp_io_init()) lustre: refresh file layout [0x280000403:0x5c0:0x0] error -5.
LustreError: 12692:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x200000403:0x722:0x0] : rc = -5
LustreError: 12692:0:(vvp_io.c:1920:vvp_io_init()) lustre: refresh file layout [0x200000403:0x722:0x0] error -5.
LustreError: 6067:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x200000403:0x722:0x0] : rc = -5
LustreError: 6067:0:(lov_object.c:1360:lov_layout_change()) Skipped 1 previous similar message
Lustre: lustre-MDT0000: trigger partial OI scrub for RPC inconsistency, checking FID [0x200000404:0x6ab:0x0]/0xa): rc = 0
LustreError: 27343:0:(osd_index.c:221:__osd_xattr_load_by_oid()) lustre-MDT0000: can't get bonus, rc = -2
LustreError: 24868:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x200000403:0x722:0x0] : rc = -5
LustreError: 24868:0:(lov_object.c:1360:lov_layout_change()) Skipped 2 previous similar messages
LustreError: 11400:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '14' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 14' to finish migration: rc = -1
LustreError: 11400:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 3 previous similar messages
LustreError: 301:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x200000403:0x722:0x0] : rc = -5
LustreError: 301:0:(lov_object.c:1360:lov_layout_change()) Skipped 1 previous similar message
Lustre: 28483:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x3a5:0x0] with magic=0xbd60bd0
Lustre: 28483:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 29 previous similar messages
Lustre: lustre-MDT0001: trigger partial OI scrub for RPC inconsistency, checking FID [0x240000403:0x639:0x0]/0xa): rc = 0
LustreError: 854:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x7aa:0x0]: rc = -5
LustreError: 854:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 37 previous similar messages
LustreError: 854:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 854:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 37 previous similar messages
LustreError: 31255:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x280000403:0x5c0:0x0] : rc = -5
LustreError: 31255:0:(lov_object.c:1360:lov_layout_change()) Skipped 1 previous similar message
LustreError: 22558:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb31b7e8: inode [0x280000404:0x3a8:0x0] mdc close failed: rc = -2
LustreError: 22558:0:(file.c:264:ll_close_inode_openhandle()) Skipped 4 previous similar messages
LustreError: 32651:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/12 failed: rc = -16
LustreError: 32651:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 102 previous similar messages
LustreError: 11404:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0001: '4' migration was interrupted, run 'lfs migrate -m 1 -c 1 -H crush 4' to finish migration: rc = -1
LustreError: 11404:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 3 previous similar messages
LustreError: 13134:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 19 [0x240000403:0x619:0x0]: rc = -5
0[23495]: segfault at 8 ip 00007f819762b7e8 sp 00007ffd4e579490 error 4 in ld-2.17.so[7f8197620000+22000]
LustreError: 25692:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x280000403:0x5c0:0x0] : rc = -5
Lustre: 21852:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x851:0x0] with magic=0xbd60bd0
Lustre: 21852:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 41 previous similar messages
LustreError: 27272:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000403:0xbd7:0x0] ACL: rc = -2
LustreError: 18956:0:(vvp_io.c:1920:vvp_io_init()) lustre: refresh file layout [0x240000403:0x6c5:0x0] error -5.
LustreError: 7370:0:(mdt_xattr.c:415:mdt_dir_layout_update()) lustre-MDT0001: [0x240000404:0x875:0x0] migrate mdt count mismatch 2 != 1
LustreError: 32477:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0002: '11' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 11' to finish migration: rc = -1
LustreError: 32477:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 6 previous similar messages
LustreError: 1853:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x240000403:0x894:0x0] : rc = -5
LustreError: 1853:0:(lov_object.c:1360:lov_layout_change()) Skipped 1 previous similar message
LustreError: 1853:0:(vvp_io.c:1920:vvp_io_init()) lustre: refresh file layout [0x240000403:0x894:0x0] error -5.
LustreError: 29437:0:(mdt_xattr.c:415:mdt_dir_layout_update()) lustre-MDT0000: [0x200000403:0xa48:0x0] migrate mdt count mismatch 3 != 2
12[13147]: segfault at 8 ip 00007f48980837e8 sp 00007ffe7730bb70 error 4 in ld-2.17.so[7f4898078000+22000]
LustreError: 15918:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x894:0x0]: rc = -5
LustreError: 15918:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 59 previous similar messages
LustreError: 15918:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 15918:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 59 previous similar messages
LustreError: 17362:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff880318064138: inode [0x200000403:0xc7f:0x0] mdc close failed: rc = -13
LustreError: 17362:0:(file.c:264:ll_close_inode_openhandle()) Skipped 10 previous similar messages
3[23889]: segfault at 8 ip 00007f427539f7e8 sp 00007ffc1fa31620 error 4 in ld-2.17.so[7f4275394000+22000]
LustreError: 11369:0:(mdd_object.c:3873:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000404:0xf81:0x0]: rc = -2
18[32193]: segfault at 0 ip (null) sp 00007ffd26a0a7a8 error 14 in 18[400000+6000]
LustreError: 21781:0:(mdt_xattr.c:415:mdt_dir_layout_update()) lustre-MDT0002: [0x280000404:0xd05:0x0] migrate mdt count mismatch 2 != 3
Lustre: 21810:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0xd0c:0x0] with magic=0xbd60bd0
Lustre: 21810:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 17 previous similar messages
Lustre: mdt02_007: service thread pid 29011 was inactive for 40.034 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 28686, comm: mdt07_011 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x870 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xbf3/0x2bd0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.76+0x211/0xc50 [mdt]
[<0>] mdt_intent_policy+0x10d/0x470 [mdt]
[<0>] ldlm_lock_enqueue+0x34f/0x930 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x35e/0x1830 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Pid: 29187, comm: mdt02_010 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x870 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xbf3/0x2bd0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.76+0x211/0xc50 [mdt]
[<0>] mdt_intent_policy+0x10d/0x470 [mdt]
[<0>] ldlm_lock_enqueue+0x34f/0x930 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x35e/0x1830 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt00_015: service thread pid 19936 was inactive for 40.128 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 2 previous similar messages
Pid: 29011, comm: mdt02_007 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x870 [ptlrpc]
[<0>] mdt_object_lock_internal+0x1a9/0x420 [mdt]
[<0>] mdt_object_lock+0x88/0x1c0 [mdt]
[<0>] mdt_getattr_name_lock+0xbf3/0x2bd0 [mdt]
[<0>] mdt_intent_getattr+0x2cc/0x4e0 [mdt]
[<0>] mdt_intent_opc.constprop.76+0x211/0xc50 [mdt]
[<0>] mdt_intent_policy+0x10d/0x470 [mdt]
[<0>] ldlm_lock_enqueue+0x34f/0x930 [ptlrpc]
[<0>] ldlm_handle_enqueue+0x35e/0x1830 [ptlrpc]
[<0>] tgt_enqueue+0x68/0x240 [ptlrpc]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
Lustre: mdt_io00_003: service thread pid 3306 was inactive for 72.209 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 1 previous similar message
Lustre: mdt_io06_005: service thread pid 29387 was inactive for 74.186 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: mdt_io04_003: service thread pid 30775 was inactive for 76.085 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 2 previous similar messages
Lustre: mdt_io02_001: service thread pid 11400 was inactive for 74.249 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: Skipped 9 previous similar messages
LustreError: 11232:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8800a1f6d2c0/0x101eeb5a80835c98 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0xf10:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x101eeb5a80835c7c expref: 612 pid: 27259 timeout: 18288 lvb_type: 0
Lustre: mdt02_010: service thread pid 29187 completed after 99.495s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt02_025: service thread pid 29629 completed after 99.436s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt03_010: service thread pid 29428 completed after 99.534s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_015: service thread pid 19936 completed after 99.528s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 3306:0:(mdt_reint.c:2519:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/11 failed: rc = -16
LustreError: 3306:0:(mdt_reint.c:2519:mdt_reint_migrate()) Skipped 150 previous similar messages
Lustre: mdt_io00_003: service thread pid 3306 completed after 105.759s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt02_007: service thread pid 29011 completed after 99.440s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt05_008: service thread pid 29509 completed after 99.419s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io04_007: service thread pid 23284 completed after 105.673s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 11-0: lustre-MDT0000-mdc-ffff8802cb31b7e8: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8802cb31b7e8: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: mdt03_014: service thread pid 8887 completed after 99.523s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt07_011: service thread pid 28686 completed after 99.549s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 167-0: lustre-MDT0000-mdc-ffff8802cb31b7e8: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 1258:0:(llite_lib.c:2014:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 2878:0:(vvp_io.c:1920:vvp_io_init()) lustre: refresh file layout [0x200000403:0xf10:0x0] error -108.
LustreError: 29387:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0000: '3' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 3' to finish migration: rc = -1
LustreError: 29387:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 4 previous similar messages
LustreError: 2914:0:(file.c:5660:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0xf10:0x0] error: rc = -108
Lustre: mdt_io06_005: service thread pid 29387 completed after 105.559s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io04_003: service thread pid 30775 completed after 105.412s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: lustre-MDT0000-mdc-ffff8802cb31b7e8: Connection restored to (at 0@lo)
Lustre: mdt_io01_007: service thread pid 12391 completed after 105.064s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io05_004: service thread pid 10983 completed after 104.128s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io04_001: service thread pid 11406 completed after 103.464s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io03_007: service thread pid 24827 completed after 103.004s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io05_005: service thread pid 21606 completed after 103.024s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io01_005: service thread pid 29530 completed after 102.254s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io04_005: service thread pid 32651 completed after 101.633s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io07_004: service thread pid 29725 completed after 100.942s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_005: service thread pid 21845 completed after 100.849s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io02_001: service thread pid 11400 completed after 100.774s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io03_002: service thread pid 11404 completed after 100.113s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io03_001: service thread pid 11403 completed after 99.753s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io05_001: service thread pid 11409 completed after 99.732s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_001: service thread pid 11394 completed after 99.598s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_002: service thread pid 11395 completed after 98.617s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io02_005: service thread pid 10710 completed after 95.444s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 19971:0:(lov_object.c:1360:lov_layout_change()) lustre-clilov-ffff8802cb31b7e8: cannot apply new layout on [0x240000403:0x6c5:0x0] : rc = -5
LustreError: 19971:0:(lov_object.c:1360:lov_layout_change()) Skipped 1 previous similar message
LustreError: 12707:0:(llite_nfs.c:446:ll_dir_get_parent_fid()) lustre: failure inode [0x240000403:0xbf1:0x0] get parent: rc = -116
Lustre: dir [0x200000404:0xf8b:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 34 previous similar messages
6[5489]: segfault at 8 ip 00007fbcc33b07e8 sp 00007ffc3ad7d5e0 error 4 in ld-2.17.so[7fbcc33a5000+22000]
Lustre: dir [0x200000403:0xe82:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
6[6903]: segfault at 8 ip 00007fed9e8387e8 sp 00007ffca2ebb850 error 4 in ld-2.17.so[7fed9e82d000+22000]
LustreError: 22503:0:(lcommon_cl.c:196:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x3553:0x0]: rc = -5
LustreError: 22503:0:(lcommon_cl.c:196:cl_file_inode_init()) Skipped 38 previous similar messages
LustreError: 22503:0:(llite_lib.c:3686:ll_prep_inode()) new_inode -fatal: rc -5
LustreError: 22503:0:(llite_lib.c:3686:ll_prep_inode()) Skipped 38 previous similar messages
Lustre: dir [0x280000404:0x1023:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 2 previous similar messages
LustreError: 25135:0:(file.c:264:ll_close_inode_openhandle()) lustre-clilmv-ffff8802cb31b7e8: inode [0x200000403:0x13b2:0x0] mdc close failed: rc = -2
LustreError: 25135:0:(file.c:264:ll_close_inode_openhandle()) Skipped 30 previous similar messages
Lustre: dir [0x240000404:0x103b:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
Lustre: 21895:0:(lod_lov.c:1438:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x4f8:0x0] with magic=0xbd60bd0
Lustre: 21895:0:(lod_lov.c:1438:lod_parse_striping()) Skipped 23 previous similar messages
LustreError: 13134:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 2 [0x240000404:0x104b:0x0]: rc = -5
LustreError: 29316:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000405:0x642:0x0] ACL: rc = -2
LustreError: 29316:0:(mdt_handler.c:777:mdt_pack_acl2body()) Skipped 1 previous similar message
LustreError: 26893:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 2 [0x240000404:0x104b:0x0]: rc = -5
LustreError: 26893:0:(statahead.c:830:ll_statahead_interpret_work()) Skipped 5 previous similar messages
1[14838]: segfault at 0 ip (null) sp 00007ffee41b58a8 error 14 in 1[400000+6000]
11[22309]: segfault at 1456 ip 0000000000001456 sp 00007fffc6a0c268 error 14 in 11[400000+6000]
5[25482]: segfault at 8 ip 00007f6f828c27e8 sp 00007fff95f239c0 error 4 in ld-2.17.so[7f6f828b7000+22000]
Lustre: dir [0x240000403:0x10e8:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 2 previous similar messages
LustreError: 2360:0:(mdd_orphans.c:281:mdd_orphan_delete()) lustre-MDD0002: could not delete orphan object [0x280000403:0x3788:0x0]: rc = -2
LustreError: 2360:0:(mdd_object.c:3927:mdd_close()) lustre-MDD0002: unable to delete [0x280000403:0x3788:0x0] from orphan list: rc = -2
LustreError: 29725:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) lustre-MDD0001: '7' migration was interrupted, run 'lfs migrate -m 2 -c 2 -H crush 7' to finish migration: rc = -1
LustreError: 29725:0:(mdd_dir.c:4472:mdd_migrate_cmd_check()) Skipped 16 previous similar messages
ptlrpc_watchdog_fire: 24 callbacks suppressed
Lustre: mdt06_000: service thread pid 11348 was inactive for 72.153 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 11348, comm: mdt06_000 3.10.0-7.9-debug #2 SMP Tue Feb 1 18:17:58 EST 2022
Call Trace:
[<0>] ldlm_completion_ast+0x963/0xd00 [ptlrpc]
[<0>] ldlm_cli_enqueue_local+0x259/0x870 [ptlrpc]
[<0>] mdt_object_pdo_lock+0x4d9/0x7e0 [mdt]
[<0>] mdt_parent_lock+0x76/0x2a0 [mdt]
[<0>] mdt_getattr_name_lock+0x17b4/0x2bd0 [mdt]
[<0>] mdt_getattr_name+0xc6/0x2d0 [mdt]
[<0>] tgt_request_handle+0x74e/0x1a50 [ptlrpc]
[<0>] ptlrpc_server_handle_request+0x26c/0xcb0 [ptlrpc]
[<0>] ptlrpc_main+0xc7e/0x1690 [ptlrpc]
[<0>] kthread+0xe4/0xf0
[<0>] ret_from_fork_nospec_begin+0x7/0x21
[<0>] 0xfffffffffffffffe
LustreError: 21799:0:(mdt_handler.c:777:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000405:0x748:0x0] ACL: rc = -2
LustreError: 21799:0:(mdt_handler.c:777:mdt_pack_acl2body()) Skipped 1 previous similar message
LustreError: 11232:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8802834ffc00/0x101eeb5a809ea166 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x6cc:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x101eeb5a809e9df4 expref: 458 pid: 8910 timeout: 18594 lvb_type: 0
LustreError: 11-0: lustre-MDT0000-mdc-ffff8802cb31b7e8: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff8802cb31b7e8: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 167-0: lustre-MDT0000-mdc-ffff8802cb31b7e8: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
Lustre: mdt06_000: service thread pid 11348 completed after 99.801s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 22957:0:(file.c:5660:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000403:0x10e8:0x0] error: rc = -5
LustreError: 22957:0:(file.c:5660:ll_inode_revalidate_fini()) Skipped 42 previous similar messages
LustreError: 19112:0:(llite_lib.c:2014:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 19112:0:(llite_lib.c:2014:ll_md_setattr()) Skipped 2 previous similar messages
LustreError: 20308:0:(mdc_request.c:1469:mdc_read_page()) lustre-MDT0000-mdc-ffff8802cb31b7e8: [0x200000400:0x66:0x0] lock enqueue fails: rc = -108
LustreError: 22971:0:(ldlm_resource.c:1128:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff8802cb31b7e8: namespace resource [0x200000007:0x1:0x0].0x0 (ffff8802eecfad40) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff8802cb31b7e8: Connection restored to (at 0@lo)
LustreError: 5873:0:(ldlm_lockd.c:1499:ldlm_handle_enqueue()) ### lock on destroyed export ffff8802c870e678 ns: mdt-lustre-MDT0000_UUID lock: ffff88024f8a43c0/0x101eeb5a80a8326a lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x17c5:0x0].0x0 bits 0x12/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x101eeb5a80a8325c expref: 14 pid: 5873 timeout: 0 lvb_type: 0
Lustre: lustre-MDT0002: trigger partial OI scrub for RPC inconsistency, checking FID [0x280000404:0x1526:0x0]/0xa): rc = 0
LustreError: 11356:0:(mdd_object.c:3873:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000403:0x15d8:0x0]: rc = -2
LustreError: 11356:0:(mdd_object.c:3873:mdd_close()) Skipped 1 previous similar message
Link to test
racer test 1: racer on clients: centos-45.localnet DURATION=2700
LustreError: 22031:0:(mdd_dir.c:226:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 22031:0:(mdd_dir.c:226:mdd_parent_fid()) LBUG
Pid: 22031, comm: mdt04_032 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa02c98bc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
[<ffffffffa02c996c>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<ffffffffa1169d74>] mdd_parent_fid+0x374/0x3b0 [mdd]
[<ffffffffa1169e80>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa116a154>] mdd_is_subdir+0x204/0x240 [mdd]
[<ffffffffa11ea86a>] mdt_reint_rename+0xcea/0x2b50 [mdt]
[<ffffffffa11f55d0>] mdt_reint_rec+0x80/0x210 [mdt]
[<ffffffffa11cfbb0>] mdt_reint_internal+0x790/0xb70 [mdt]
[<ffffffffa11daf27>] mdt_reint+0x67/0x140 [mdt]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Lustre: lfs: using old ioctl(LL_IOC_LOV_GETSTRIPE) on [0x240000404:0x2:0x0], use llapi_layout_get_by_path()
Lustre: mdt04_010: service thread pid 20992 was inactive for 62.109 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: mdt04_014: service thread pid 21089 was inactive for 62.026 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
Lustre: mdt04_019: service thread pid 21763 was inactive for 62.026 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Pid: 21763, comm: mdt04_019 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 20927, comm: mdt04_008 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 10740, comm: mdt04_000 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 21755, comm: mdt04_018 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 14412, comm: mdt04_003 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 21012, comm: mdt04_012 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
Lustre: Skipped 1 previous similar message
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 20880, comm: mdt04_006 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
Lustre: mdt04_002: service thread pid 10742 was inactive for 62.438 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Lustre: Skipped 6 previous similar messages
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 10742, comm: mdt04_002 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 20685, comm: mdt04_005 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 21753, comm: mdt04_017 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 21887, comm: mdt04_021 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 20951, comm: mdt04_009 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 21888, comm: mdt04_022 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
Pid: 21872, comm: mdt04_020 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa06ff740>] ldlm_completion_ast+0x430/0x860 [ptlrpc]
[<ffffffffa07002a9>] ldlm_cli_enqueue_local+0x259/0x830 [ptlrpc]
[<ffffffffa11d4973>] mdt_object_local_lock+0x523/0xb50 [mdt]
[<ffffffffa11d5010>] mdt_object_lock_internal+0x70/0x360 [mdt]
[<ffffffffa11d619a>] mdt_getattr_name_lock+0x92a/0x1c90 [mdt]
[<ffffffffa11dd6b5>] mdt_intent_getattr+0x2b5/0x480 [mdt]
[<ffffffffa11d2e47>] mdt_intent_opc+0x1b7/0xb30 [mdt]
[<ffffffffa11dad04>] mdt_intent_policy+0x1a4/0x360 [mdt]
[<ffffffffa06e6c63>] ldlm_lock_enqueue+0x353/0x9f0 [ptlrpc]
[<ffffffffa070ef56>] ldlm_handle_enqueue0+0xa46/0x15d0 [ptlrpc]
[<ffffffffa0795b02>] tgt_enqueue+0x62/0x210 [ptlrpc]
[<ffffffffa079a595>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa07414f0>] ptlrpc_server_handle_request+0x250/0xb10 [ptlrpc]
[<ffffffffa0745681>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
LustreError: 10608:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 192.168.123.145@tcp ns: mdt-lustre-MDT0002_UUID lock: ffff88008bbcfd80/0xf927fb7a7ec621f3 lrc: 3/0,0 mode: PR/PR res: [0x280000404:0x7:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT flags: 0x60200400000020 nid: 192.168.123.145@tcp remote: 0xf927fb7a7ec61ec7 expref: 30 pid: 10740 timeout: 40851 lvb_type: 0
LustreError: 20880:0:(ldlm_lockd.c:1412:ldlm_handle_enqueue0()) ### lock on destroyed export ffff8802e8e69800 ns: mdt-lustre-MDT0002_UUID lock: ffff880287ae2d80/0xf927fb7a7ec624cb lrc: 3/0,0 mode: PR/PR res: [0x280000403:0x1:0x0].0x0 bits 0x13/0x0 rrc: 32 type: IBT flags: 0x50200000000000 nid: 192.168.123.145@tcp remote: 0xf927fb7a7ec622f6 expref: 17 pid: 20880 timeout: 0 lvb_type: 0
LustreError: 11-0: lustre-MDT0002-mdc-ffff88026c728800: operation ldlm_enqueue to node 192.168.123.145@tcp failed: rc = -107
Lustre: lustre-MDT0002-mdc-ffff88026c728800: Connection to lustre-MDT0002 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0002: Connection restored to 192.168.123.145@tcp (at 192.168.123.145@tcp)
Lustre: Skipped 13 previous similar messages
LustreError: 167-0: lustre-MDT0002-mdc-ffff88026c728800: This client was evicted by lustre-MDT0002; in progress operations using this service will fail.
LustreError: 20558:0:(file.c:4630:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000403:0x1:0x0] error: rc = -5
LustreError: 20558:0:(file.c:4630:ll_inode_revalidate_fini()) Skipped 12 previous similar messages
LustreError: 20866:0:(llite_lib.c:2564:ll_prep_inode()) new_inode -fatal: rc -108
LustreError: 20942:0:(lmv_obd.c:1175:lmv_fid_alloc()) Can't alloc new fid, rc -19
LustreError: 20866:0:(llite_lib.c:2564:ll_prep_inode()) Skipped 3 previous similar messages
LustreError: 26682:0:(ldlm_resource.c:1147:ldlm_resource_complain()) lustre-MDT0002-mdc-ffff8802fe21d800: namespace resource [0x280000403:0x1:0x0].0x0 (ffff8800898badc0) refcount nonzero (2) after lock cleanup; forcing cleanup.
Lustre: 27919:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203161/real 1569203161] req@ffff8802e0c79b40 x1645428536052032/t0(0) o36->lustre-MDT0000-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203315 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0000-mdc-ffff88026c728800: Connection to lustre-MDT0000 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0000: Client f47f6d7f-05be-4 (at 192.168.123.145@tcp) reconnecting
Lustre: lustre-MDT0000: Connection restored to 192.168.123.145@tcp (at 192.168.123.145@tcp)
Lustre: Skipped 2 previous similar messages
Lustre: 27769:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203164/real 1569203165] req@ffff8801eb03db40 x1645428536110592/t0(0) o36->lustre-MDT0000-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203319 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0000-mdc-ffff8802fe21d800: Connection to lustre-MDT0000 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0000: Client 2b4aca3b-1bc9-4 (at 192.168.123.145@tcp) reconnecting
Lustre: lustre-OST0003-osc-ffff88026c728800: disconnect after 22s idle
Lustre: lustre-OST0003-osc-ffff88026c728800: reconnect after 119s idle
Lustre: lustre-OST0003: Connection restored to 192.168.123.145@tcp (at 192.168.123.145@tcp)
Lustre: Skipped 3 previous similar messages
Lustre: 6676:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203318/real 1569203318] req@ffff880092daab40 x1645428536472832/t0(0) o36->lustre-MDT0001-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203472 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: lustre-MDT0001-mdc-ffff88026c728800: Connection to lustre-MDT0001 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0001: Client f47f6d7f-05be-4 (at 192.168.123.145@tcp) reconnecting
Lustre: 8861:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203322/real 1569203322] req@ffff88026b93db40 x1645428536561984/t0(0) o36->lustre-MDT0000-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203476 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0000-mdc-ffff8802fe21d800: Connection to lustre-MDT0000 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0000: Client 2b4aca3b-1bc9-4 (at 192.168.123.145@tcp) reconnecting
9[13717]: segfault at 8 ip 00007f9e26bb5958 sp 00007ffc04dfd330 error 4 in ld-2.17.so[7f9e26baa000+22000]
LustreError: 21905:0:(mdt_lvb.c:422:mdt_lvbo_fill()) lustre-MDT0000: small buffer size 448 for EA 520 (max_mdsize 520): rc = -34
LustreError: 16854:0:(namei.c:87:ll_set_inode()) Can not initialize inode [0x280000406:0x4e:0x0] without object type: valid = 0x100000001
LustreError: 16854:0:(llite_lib.c:2564:ll_prep_inode()) new_inode -fatal: rc -12
Lustre: 5750:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203472/real 1569203472] req@ffff88028b4c8b40 x1645428537606016/t0(0) o36->lustre-MDT0000-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203626 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0000-mdc-ffff88026c728800: Connection to lustre-MDT0000 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0000: Client f47f6d7f-05be-4 (at 192.168.123.145@tcp) reconnecting
Lustre: 18029:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203509/real 1569203509] req@ffff8802d1614b40 x1645428538261184/t0(0) o36->lustre-MDT0002-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203664 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 18029:0:(client.c:2209:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
Lustre: lustre-MDT0002-mdc-ffff88026c728800: Connection to lustre-MDT0002 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 2 previous similar messages
Lustre: lustre-MDT0002: Client f47f6d7f-05be-4 (at 192.168.123.145@tcp) reconnecting
Lustre: Skipped 2 previous similar messages
Lustre: 7471:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203632/real 1569203632] req@ffff8801f8dd8b40 x1645428538550464/t0(0) o36->lustre-MDT0001-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203786 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: lustre-MDT0001-mdc-ffff88026c728800: Connection to lustre-MDT0001 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0001: Client f47f6d7f-05be-4 (at 192.168.123.145@tcp) reconnecting
Lustre: lustre-MDT0001: Connection restored to 192.168.123.145@tcp (at 192.168.123.145@tcp)
Lustre: Skipped 12 previous similar messages
Lustre: 31006:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203799/real 1569203799] req@ffff8802c94eab40 x1645428539640448/t0(0) o36->lustre-MDT0000-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203845 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 31006:0:(client.c:2209:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
Lustre: lustre-MDT0000: Client 2b4aca3b-1bc9-4 (at 192.168.123.145@tcp) reconnecting
Lustre: Skipped 7 previous similar messages
16[16243]: segfault at 0 ip (null) sp 00007fff37e08c18 error 14 in 16[400000+6000]
Lustre: lustre-MDT0001-mdc-ffff8802fe21d800: Connection to lustre-MDT0001 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 9 previous similar messages
19[20889]: segfault at 0 ip (null) sp 00007ffc938c0d18 error 14 in 19[400000+6000]
10[29084]: segfault at 8 ip 00007efc7f703958 sp 00007ffe0f54a6f0 error 4 in ld-2.17.so[7efc7f6f8000+22000]
LustreError: 1484:0:(namei.c:87:ll_set_inode()) Can not initialize inode [0x200000403:0x140:0x0] without object type: valid = 0x100000001
LustreError: 1484:0:(llite_lib.c:2564:ll_prep_inode()) new_inode -fatal: rc -12
Lustre: 10980:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569203869/real 1569203869] req@ffff8801eae27b40 x1645428540826432/t0(0) o36->lustre-MDT0000-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569203915 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: 10980:0:(client.c:2209:ptlrpc_expire_one_request()) Skipped 8 previous similar messages
Lustre: lustre-MDT0000: Client f47f6d7f-05be-4 (at 192.168.123.145@tcp) reconnecting
Lustre: Skipped 8 previous similar messages
Lustre: 5839:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff8802e10eeb40 x1645428541872384/t0(0) o10->lustre-OST0000-osc-ffff8802fe21d800@192.168.123.145@tcp:6/4 lens 440/432 e 0 to 0 dl 1569203980 ref 1 fl Rpc:RQU/2/0 rc 0/-115 job:'truncate.0'
LustreError: 17698:0:(namei.c:87:ll_set_inode()) Can not initialize inode [0x200000403:0x191:0x0] without object type: valid = 0x100000001
LustreError: 17698:0:(llite_lib.c:2564:ll_prep_inode()) new_inode -fatal: rc -12
Lustre: 5832:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff88021a0b1b40 x1645428542058944/t0(0) o10->lustre-OST0000-osc-ffff88026c728800@192.168.123.145@tcp:6/4 lens 440/432 e 0 to 0 dl 1569203989 ref 1 fl Rpc:RQU/2/0 rc 0/-115 job:'dd.0'
5[15425]: segfault at 8 ip 00007f76fa593958 sp 00007ffd4a8812f0 error 4 in ld-2.17.so[7f76fa588000+22000]
13[19221]: segfault at 0 ip (null) sp 00007ffffe47e788 error 14 in 13[400000+6000]
Lustre: lustre-MDT0000-mdc-ffff88026c728800: Connection to lustre-MDT0000 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 11 previous similar messages
Lustre: 22476:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff88008e3e9b40 x1645428542902336/t0(0) o36->lustre-MDT0002-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 504/456 e 0 to 0 dl 1569204049 ref 2 fl Rpc:RQU/2/0 rc 0/-115 job:'rm.0'
Lustre: 22476:0:(client.c:1444:after_reply()) Skipped 1 previous similar message
16[1984]: segfault at 0 ip (null) sp 00007fff8eaff548 error 14 in 16 (deleted)[400000+6000]
Lustre: 21202:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569204048/real 1569204048] req@ffff88008fc71b40 x1645428543442816/t0(0) o36->lustre-MDT0002-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569204094 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 21202:0:(client.c:2209:ptlrpc_expire_one_request()) Skipped 7 previous similar messages
Lustre: lustre-MDT0002: Client 2b4aca3b-1bc9-4 (at 192.168.123.145@tcp) reconnecting
Lustre: Skipped 7 previous similar messages
Lustre: 5839:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff880256269b40 x1645428544820544/t0(0) o10->lustre-OST0000-osc-ffff8802fe21d800@192.168.123.145@tcp:6/4 lens 440/432 e 0 to 0 dl 1569204189 ref 1 fl Rpc:RQU/2/0 rc 0/-115 job:'dd.0'
Lustre: 5839:0:(client.c:1444:after_reply()) Skipped 11 previous similar messages
Lustre: 23590:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff88029054db40 x1645428545622400/t0(0) o36->lustre-MDT0002-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 496/440 e 0 to 0 dl 1569204251 ref 2 fl Rpc:RQU/2/0 rc 0/-115 job:'setfattr.0'
Lustre: lustre-MDT0001-mdc-ffff88026c728800: Connection to lustre-MDT0001 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 15 previous similar messages
Lustre: 4726:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff88008e38cb40 x1645428546181312/t0(0) o36->lustre-MDT0000-mdc-ffff88026c728800@192.168.123.145@tcp:12/10 lens 488/456 e 0 to 0 dl 1569204295 ref 2 fl Rpc:RQU/2/0 rc 0/-115 job:'chmod.0'
Lustre: 4726:0:(client.c:1444:after_reply()) Skipped 11 previous similar messages
LustreError: 10608:0:(ldlm_lockd.c:261:expired_lock_main()) ### lock callback timer expired after 99s: evicting client at 192.168.123.145@tcp ns: filter-lustre-OST0000_UUID lock: ffff8802c3e5ed80/0xf927fb7a7ed8f415 lrc: 3/0,0 mode: PW/PW res: [0x2c0000400:0xb4:0x0].0x0 rrc: 6 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) flags: 0x60000480030020 nid: 192.168.123.145@tcp remote: 0xf927fb7a7ed8f335 expref: 49 pid: 28651 timeout: 41988 lvb_type: 0
LustreError: 10608:0:(ldlm_lockd.c:261:expired_lock_main()) Skipped 1 previous similar message
LustreError: 167-0: lustre-OST0000-osc-ffff88026c728800: This client was evicted by lustre-OST0000; in progress operations using this service will fail.
LustreError: Skipped 1 previous similar message
LustreError: 26123:0:(ldlm_resource.c:1147:ldlm_resource_complain()) lustre-OST0000-osc-ffff88026c728800: namespace resource [0x2c0000400:0xb4:0x0].0x0 (ffff88025f11b8c0) refcount nonzero (1) after lock cleanup; forcing cleanup.
LustreError: 26123:0:(ldlm_resource.c:1147:ldlm_resource_complain()) Skipped 1 previous similar message
LustreError: 7768:0:(ofd_obd.c:586:ofd_destroy_export()) lustre-OST0000: cli f47f6d7f-05be-4/ffff880286cd3800 has 1703936 pending on destroyed export
LustreError: 7768:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_destroy_export: tot_granted 39059456 != fo_tot_granted 40763392
LustreError: 7768:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_destroy_export: tot_pending 1703936 != fo_tot_pending 3407872
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 40501248 != fo_tot_granted 42205184
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 1 previous similar message
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 3670016 != fo_tot_pending 5373952
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 1 previous similar message
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 56885248 != fo_tot_granted 58589184
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 12189696 != fo_tot_pending 13893632
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 61341696 != fo_tot_granted 63045632
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 20840448 != fo_tot_pending 22544384
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 62128128 != fo_tot_granted 63832064
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 15597568 != fo_tot_pending 17301504
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 49020928 != fo_tot_granted 50724864
LustreError: 19676:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 5 previous similar messages
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 11927552 != fo_tot_pending 13631488
LustreError: 19676:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 5 previous similar messages
LustreError: 14122:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 38666240 != fo_tot_granted 40370176
LustreError: 14122:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 10 previous similar messages
LustreError: 14122:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 3407872 != fo_tot_pending 5111808
LustreError: 14122:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 10 previous similar messages
Lustre: 29469:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569204305/real 1569204305] req@ffff8802cc122b40 x1645428546500736/t0(0) o36->lustre-MDT0000-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569204351 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 29469:0:(client.c:2209:ptlrpc_expire_one_request()) Skipped 13 previous similar messages
Lustre: lustre-MDT0000: Client 2b4aca3b-1bc9-4 (at 192.168.123.145@tcp) reconnecting
Lustre: Skipped 13 previous similar messages
Lustre: lustre-MDT0000: Connection restored to 192.168.123.145@tcp (at 192.168.123.145@tcp)
Lustre: Skipped 79 previous similar messages
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 42205184 != fo_tot_granted 43909120
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 18 previous similar messages
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 1703936 != fo_tot_pending 3407872
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 18 previous similar messages
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 57671680 != fo_tot_granted 59375616
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 39 previous similar messages
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 15335424 != fo_tot_pending 17039360
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 39 previous similar messages
Lustre: 4762:0:(client.c:1444:after_reply()) @@@ resending request on EINPROGRESS req@ffff8802cd7a5b40 x1645428550566976/t0(0) o36->lustre-MDT0001-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/456 e 0 to 0 dl 1569204550 ref 2 fl Rpc:RQU/2/0 rc 0/-115 job:'truncate.0'
Lustre: 4762:0:(client.c:1444:after_reply()) Skipped 9 previous similar messages
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 38010880 != fo_tot_granted 39714816
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 72 previous similar messages
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 1966080 != fo_tot_pending 3670016
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 72 previous similar messages
LustreError: 22009:0:(mdt_open.c:1232:mdt_cross_open()) lustre-MDT0001: [0x240000403:0x47b:0x0] doesn't exist!: rc = -14
LustreError: 21019:0:(mdt_open.c:1232:mdt_cross_open()) lustre-MDT0001: [0x240000403:0x47b:0x0] doesn't exist!: rc = -14
Lustre: lustre-MDT0000-mdc-ffff88026c728800: Connection to lustre-MDT0000 (at 192.168.123.145@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 68 previous similar messages
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) ofd_statfs: tot_granted 77201408 != fo_tot_granted 78905344
LustreError: 14124:0:(tgt_grant.c:248:tgt_grant_sanity_check()) Skipped 151 previous similar messages
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) ofd_statfs: tot_pending 8650752 != fo_tot_pending 10354688
LustreError: 14124:0:(tgt_grant.c:251:tgt_grant_sanity_check()) Skipped 151 previous similar messages
Lustre: 22302:0:(client.c:2209:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1569204851/real 1569204851] req@ffff88024a7b3b40 x1645428556526784/t0(0) o36->lustre-MDT0001-mdc-ffff8802fe21d800@192.168.123.145@tcp:12/10 lens 488/512 e 0 to 1 dl 1569204865 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: 22302:0:(client.c:2209:ptlrpc_expire_one_request()) Skipped 95 previous similar messages
Lustre: lustre-MDT0001: Client 2b4aca3b-1bc9-4 (at 192.168.123.145@tcp) reconnecting
Lustre: Skipped 92 previous similar messages
LustreError: 27301:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff880088fa7b40 x1645428557627584/t0(0) o37->2b4aca3b-1bc9-4@192.168.123.145@tcp:117/0 lens 448/440 e 0 to 0 dl 1569204952 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
Lustre: lustre-MDT0002: Connection restored to 192.168.123.145@tcp (at 192.168.123.145@tcp)
Lustre: Skipped 231 previous similar messages
LustreError: 29239:0:(namei.c:87:ll_set_inode()) Can not initialize inode [0x200000403:0x73c:0x0] without object type: valid = 0x100000001
LustreError: 29239:0:(llite_lib.c:2564:ll_prep_inode()) new_inode -fatal: rc -12
LustreError: 21120:0:(mdt_open.c:1232:mdt_cross_open()) lustre-MDT0002: [0x280000405:0x1a82:0x0] doesn't exist!: rc = -14
Externally reported by onyx-68 boilpot email
racer test 1: racer on clients: centos-50.localnet DURATION=2700
LustreError: 27019:0:(mdd_dir.c:226:mdd_parent_fid()) ASSERTION( S_ISDIR(mdd_object_type(obj)) ) failed: lustre-MDD0000: FID [0x200000003:0xa:0x0] is not a directory type = 100000
LustreError: 27019:0:(mdd_dir.c:226:mdd_parent_fid()) LBUG
Pid: 27019, comm: mdt04_023 3.10.0-7.6-debug #1 SMP Wed Nov 7 21:55:08 EST 2018
Call Trace:
[<ffffffffa02b98bc>] libcfs_call_trace+0x8c/0xc0 [libcfs]
[<ffffffffa02b996c>] lbug_with_loc+0x4c/0xa0 [libcfs]
[<ffffffffa113dd44>] mdd_parent_fid+0x374/0x3b0 [mdd]
[<ffffffffa113de50>] mdd_is_parent+0xd0/0x1a0 [mdd]
[<ffffffffa113e124>] mdd_is_subdir+0x204/0x240 [mdd]
[<ffffffffa11d67da>] mdt_reint_rename+0xcea/0x2b50 [mdt]
[<ffffffffa11e1540>] mdt_reint_rec+0x80/0x210 [mdt]
[<ffffffffa11bbb50>] mdt_reint_internal+0x780/0xb50 [mdt]
[<ffffffffa11c6ec7>] mdt_reint+0x67/0x140 [mdt]
[<ffffffffa074e4e5>] tgt_request_handle+0x915/0x15c0 [ptlrpc]
[<ffffffffa06f1458>] ptlrpc_server_handle_request+0x258/0xb20 [ptlrpc]
[<ffffffffa06f5601>] ptlrpc_main+0xca1/0x2290 [ptlrpc]
[<ffffffff810b4ed4>] kthread+0xe4/0xf0
[<ffffffff817c4c5d>] ret_from_fork_nospec_begin+0x7/0x21
[<ffffffffffffffff>] 0xffffffffffffffff
18[29822]: segfault at 8 ip 00007f2d575a6958 sp 00007fff24e23c60 error 4 in ld-2.17.so[7f2d5759b000+22000]
LustreError: 26443:0:(mdt_lvb.c:422:mdt_lvbo_fill()) lustre-MDT0000: small buffer size 496 for EA 520 (max_mdsize 520): rc = -34
Lustre: lfs: using old ioctl(LL_IOC_LOV_GETSTRIPE) on [0x200000404:0x28:0x0], use llapi_layout_get_by_path()
Lustre: 26848:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137176/real 1568137176] req@ffff88027495db40 x1644310816899328/t0(0) o36->lustre-MDT0001-mdc-ffff8802847ca800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137220 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0001-mdc-ffff8802847ca800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0001: Client 47120ec6-bd57-4 (at 192.168.123.150@tcp) reconnecting
Lustre: lustre-MDT0001: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 26 previous similar messages
Lustre: 26858:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137177/real 1568137177] req@ffff88028172eb40 x1644310816912960/t0(0) o36->lustre-MDT0002-mdc-ffff8802847ca800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137221 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 26858:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Lustre: lustre-MDT0002-mdc-ffff8802847ca800: Connection to lustre-MDT0002 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0002: Client 47120ec6-bd57-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 1 previous similar message
Lustre: 29046:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137188/real 1568137188] req@ffff880274000b40 x1644310817040768/t0(0) o36->lustre-MDT0001-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137232 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0001-mdc-ffff880280fae800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0001: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: 26847:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137221/real 1568137221] req@ffff880280b3cb40 x1644310817444992/t0(0) o36->lustre-MDT0002-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137265 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: lustre-MDT0002-mdc-ffff880280fae800: Connection to lustre-MDT0002 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: lustre-MDT0002: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: 1909:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137225/real 1568137225] req@ffff8802873c5b40 x1644310817510656/t0(0) o36->lustre-MDT0001-mdc-ffff8802847ca800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137269 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: 1909:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 2 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff8802847ca800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 2 previous similar messages
Lustre: lustre-MDT0001: Client 47120ec6-bd57-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 2 previous similar messages
Lustre: 10674:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137268/real 1568137269] req@ffff880268975b40 x1644310818186176/t0(0) o36->lustre-MDT0002-mdc-ffff8802847ca800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137312 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: 10674:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 1 previous similar message
Lustre: lustre-MDT0002-mdc-ffff8802847ca800: Connection to lustre-MDT0002 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0002: Client 47120ec6-bd57-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 1 previous similar message
Lustre: lustre-MDT0002: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 16 previous similar messages
Lustre: 2685:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137314/real 1568137314] req@ffff880285fe4b40 x1644310818874240/t0(0) o36->lustre-MDT0001-mdc-ffff8802847ca800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137358 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: 2685:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff8802847ca800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 5 previous similar messages
Lustre: lustre-MDT0001: Client 47120ec6-bd57-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 5 previous similar messages
Lustre: 11228:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137361/real 1568137361] req@ffff88026d67cb40 x1644310819737920/t0(0) o36->lustre-MDT0001-mdc-ffff8802847ca800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137405 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 11228:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 5 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff8802847ca800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 5 previous similar messages
Lustre: lustre-MDT0001: Client 47120ec6-bd57-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 5 previous similar messages
14[23748]: segfault at 8 ip 00007f115c12d958 sp 00007ffce0d4b9e0 error 4 in ld-2.17.so[7f115c122000+22000]
Lustre: lustre-MDT0001: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 35 previous similar messages
7[30631]: segfault at 8 ip 00007efd5540b958 sp 00007fffc9ab88a0 error 4 in ld-2.17.so[7efd55400000+22000]
3[4042]: segfault at 8 ip 00007fb7e3274958 sp 00007ffe549e9ef0 error 4 in ld-2.17.so[7fb7e3269000+22000]
Lustre: 22480:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137436/real 1568137436] req@ffff8802ffe65b40 x1644310821327040/t0(0) o36->lustre-MDT0000-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137480 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 22480:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 10 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff880280fae800: Connection to lustre-MDT0000 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 10 previous similar messages
Lustre: lustre-MDT0000: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 10 previous similar messages
2[30850]: segfault at 1366 ip 0000000000001366 sp 00007ffde0fa3368 error 14 in 2[400000+6000]
17[14249]: segfault at 8 ip 00007fdcd505f958 sp 00007ffe1c16a0d0 error 4 in ld-2.17.so[7fdcd5054000+22000]
8[17137]: segfault at 8 ip 00007fcaf52a4958 sp 00007ffe5f07c200 error 4 in ld-2.17.so[7fcaf5299000+22000]
Lustre: 1240:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137573/real 1568137573] req@ffff8802934e6b40 x1644310824685696/t0(0) o36->lustre-MDT0000-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137617 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 1240:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 17 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff880280fae800: Connection to lustre-MDT0000 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 17 previous similar messages
Lustre: lustre-MDT0000: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 17 previous similar messages
17[19259]: segfault at 8 ip 00007f00bdb70958 sp 00007ffcbe37bf10 error 4 in ld-2.17.so[7f00bdb65000+22000]
6[21218]: segfault at 8 ip 00007f2570b1f958 sp 00007ffd34f025f0 error 4 in ld-2.17.so[7f2570b14000+22000]
2[31650]: segfault at 8 ip 00007fbbf576f958 sp 00007fffec686a60 error 4 in ld-2.17.so[7fbbf5764000+22000]
14[32430]: segfault at 8 ip 00007fd80efee958 sp 00007ffc3ac10a90 error 4 in ld-2.17.so[7fd80efe3000+22000]
Lustre: lustre-MDT0001: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 73 previous similar messages
3[18059]: segfault at 8 ip 00007fb88d300958 sp 00007ffc48539370 error 4 in ld-2.17.so[7fb88d2f5000+22000]
6[1879]: segfault at 0 ip (null) sp 00007ffc9238dff8 error 14 in 6[400000+6000]
14[3028]: segfault at 0 ip (null) sp 00007ffcff3cf008 error 14 in 14[400000+6000]
4[7359]: segfault at 8 ip 00007f6c415a1958 sp 00007ffe367aa390 error 4 in ld-2.17.so[7f6c41596000+22000]
LustreError: 26626:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802755b8b40 x1644310829972544/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:518/0 lens 448/440 e 0 to 0 dl 1568137783 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
1[8447]: segfault at 8 ip 00007f08ef52f958 sp 00007ffe7fc5d4b0 error 4 in ld-2.17.so[7f08ef524000+22000]
4[7895]: segfault at 8 ip 00007fafed2ce958 sp 00007ffcae55d2c0 error 4 in ld-2.17.so[7fafed2c3000+22000]
1[18609]: segfault at 0 ip (null) sp 00007ffe4dde0cc8 error 14 in 1 (deleted)[400000+6000]
LustreError: 25745:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff880269ffbb40 x1644310831174912/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:594/0 lens 448/440 e 0 to 0 dl 1568137859 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 26626:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8800775aab40 x1644310832370304/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:632/0 lens 448/440 e 0 to 0 dl 1568137897 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 25745:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8800824fcb40 x1644310832647104/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:640/0 lens 448/440 e 0 to 0 dl 1568137905 ref 1 fl Interpret:/0/0 rc 0/0 job:'getfattr.0'
Lustre: 9860:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568137865/real 1568137865] req@ffff880263b32b40 x1644310832766784/t0(0) o36->lustre-MDT0002-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568137873 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 9860:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 99 previous similar messages
Lustre: lustre-MDT0002-mdc-ffff880280fae800: Connection to lustre-MDT0002 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 95 previous similar messages
Lustre: lustre-MDT0002: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 95 previous similar messages
18[22735]: segfault at 8 ip 00007f0bf4413958 sp 00007fffd472fa50 error 4 in ld-2.17.so[7f0bf4408000+22000]
1[29887]: segfault at 0 ip (null) sp 00007ffc1020c238 error 14 in 1 (deleted)[400000+6000]
LustreError: 10820:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802ccd19b40 x1644310834892224/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:710/0 lens 448/440 e 0 to 0 dl 1568137975 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 10820:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802f71ae850 x1644310834892224/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:710/0 lens 448/440 e 0 to 0 dl 1568137975 ref 1 fl Interpret:/2/0 rc 0/0 job:'ls.0'
10[11555]: segfault at 0 ip (null) sp 00007ffdfe2b9998 error 14 in 10[400000+6000]
13[16287]: segfault at 8 ip 00007f05ce149958 sp 00007ffd8d5390b0 error 4 in ld-2.17.so[7f05ce13e000+22000]
INFO: task dir_create.sh:25133 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
dir_create.sh D ffff880286148c40 11264 25133 25097 0x00000080
Call Trace:
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff817ad1e5>] lookup_slow+0x33/0xa7
[<ffffffff8124677f>] link_path_walk+0x81f/0x8c0
[<ffffffff8106f93a>] ? __change_page_attr_set_clr+0xcfa/0xea0
[<ffffffff81238a2c>] ? get_empty_filp+0x5c/0x1f0
[<ffffffff812478fe>] path_openat+0xae/0x650
[<ffffffff812492cd>] do_filp_open+0x4d/0xb0
[<ffffffff813fb7bb>] ? do_raw_spin_unlock+0x4b/0x90
[<ffffffff817b99ae>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81256df3>] ? __alloc_fd+0xc3/0x170
[<ffffffff81235167>] do_sys_open+0x137/0x240
[<ffffffff8123528e>] SyS_open+0x1e/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task ls:25899 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff88007afee940 11968 25899 25362 0x00000080
Call Trace:
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff817ad1e5>] lookup_slow+0x33/0xa7
[<ffffffff8124677f>] link_path_walk+0x81f/0x8c0
[<ffffffff81238a2c>] ? get_empty_filp+0x5c/0x1f0
[<ffffffff812478fe>] path_openat+0xae/0x650
[<ffffffff812492cd>] do_filp_open+0x4d/0xb0
[<ffffffff813fb7bb>] ? do_raw_spin_unlock+0x4b/0x90
[<ffffffff817b99ae>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81256df3>] ? __alloc_fd+0xc3/0x170
[<ffffffff81235167>] do_sys_open+0x137/0x240
[<ffffffff812352a4>] SyS_openat+0x14/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task ls:25900 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff88026aff6d40 11968 25900 25362 0x00000080
Call Trace:
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff817ad1e5>] lookup_slow+0x33/0xa7
[<ffffffff8124677f>] link_path_walk+0x81f/0x8c0
[<ffffffff81238a2c>] ? get_empty_filp+0x5c/0x1f0
[<ffffffff812478fe>] path_openat+0xae/0x650
[<ffffffff812492cd>] do_filp_open+0x4d/0xb0
[<ffffffff813fb7bb>] ? do_raw_spin_unlock+0x4b/0x90
[<ffffffff817b99ae>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81256df3>] ? __alloc_fd+0xc3/0x170
[<ffffffff81235167>] do_sys_open+0x137/0x240
[<ffffffff812352a4>] SyS_openat+0x14/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task ls:25902 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff8802bf6b44c0 11720 25902 25362 0x00000080
Call Trace:
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff817ad1e5>] lookup_slow+0x33/0xa7
[<ffffffff8124677f>] link_path_walk+0x81f/0x8c0
[<ffffffff81238a2c>] ? get_empty_filp+0x5c/0x1f0
[<ffffffff812478fe>] path_openat+0xae/0x650
[<ffffffff812492cd>] do_filp_open+0x4d/0xb0
[<ffffffff813fb7bb>] ? do_raw_spin_unlock+0x4b/0x90
[<ffffffff817b99ae>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81256df3>] ? __alloc_fd+0xc3/0x170
[<ffffffff81235167>] do_sys_open+0x137/0x240
[<ffffffff812352a4>] SyS_openat+0x14/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task ls:25907 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff88009c388880 11968 25907 25362 0x00000080
Call Trace:
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff817ad1e5>] lookup_slow+0x33/0xa7
[<ffffffff8124677f>] link_path_walk+0x81f/0x8c0
[<ffffffff81238a2c>] ? get_empty_filp+0x5c/0x1f0
[<ffffffff812478fe>] path_openat+0xae/0x650
[<ffffffff812492cd>] do_filp_open+0x4d/0xb0
[<ffffffff813fb7bb>] ? do_raw_spin_unlock+0x4b/0x90
[<ffffffff817b99ae>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81256df3>] ? __alloc_fd+0xc3/0x170
[<ffffffff81235167>] do_sys_open+0x137/0x240
[<ffffffff812352a4>] SyS_openat+0x14/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task ls:25909 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
ls D ffff88009a0e6900 11968 25909 25362 0x00000080
Call Trace:
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff81242780>] ? lookup_fast+0xf0/0x220
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff81244b4f>] do_last+0x28f/0x1220
[<ffffffff81238a2c>] ? get_empty_filp+0x5c/0x1f0
[<ffffffff8124791d>] path_openat+0xcd/0x650
[<ffffffff812492cd>] do_filp_open+0x4d/0xb0
[<ffffffff813fb7bb>] ? do_raw_spin_unlock+0x4b/0x90
[<ffffffff817b99ae>] ? _raw_spin_unlock+0xe/0x20
[<ffffffff81256df3>] ? __alloc_fd+0xc3/0x170
[<ffffffff81235167>] do_sys_open+0x137/0x240
[<ffffffff812352a4>] SyS_openat+0x14/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task mv:28361 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mv D ffff88027df1cd00 11744 28361 25317 0x00000080
Call Trace:
[<ffffffff81259b54>] ? mntput+0x24/0x40
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff812428e1>] lock_rename+0x31/0xe0
[<ffffffff81248aff>] SYSC_renameat2+0x22f/0x570
[<ffffffff811e2062>] ? handle_mm_fault+0xc2/0x150
[<ffffffff81249c2e>] SyS_renameat2+0xe/0x10
[<ffffffff81249c6e>] SyS_rename+0x1e/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task mv:31344 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mv D ffff88025d49e5c0 11680 31344 25355 0x00000080
Call Trace:
[<ffffffff81259b54>] ? mntput+0x24/0x40
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff81242915>] lock_rename+0x65/0xe0
[<ffffffff81248aff>] SYSC_renameat2+0x22f/0x570
[<ffffffff810f362c>] ? ktime_get+0x4c/0xd0
[<ffffffff811e2062>] ? handle_mm_fault+0xc2/0x150
[<ffffffff81249c2e>] SyS_renameat2+0xe/0x10
[<ffffffff81249c6e>] SyS_rename+0x1e/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task mv:715 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
mv D ffff880262c52b80 12088 715 25169 0x00000080
Call Trace:
[<ffffffff81259b54>] ? mntput+0x24/0x40
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff812428e1>] lock_rename+0x31/0xe0
[<ffffffff81248aff>] SYSC_renameat2+0x22f/0x570
[<ffffffff811e2062>] ? handle_mm_fault+0xc2/0x150
[<ffffffff81249c2e>] SyS_renameat2+0xe/0x10
[<ffffffff81249c6e>] SyS_rename+0x1e/0x20
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
INFO: task rm:954 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rm D ffff8802ce14a800 12336 954 25236 0x00000080
8[15290]: segfault at 8 ip 00007f3e8f692958 sp 00007ffeaba87370 error 4 in ld-2.17.so[7f3e8f687000+22000]
Call Trace:
[<ffffffff8124806f>] ? getname_flags+0x4f/0x1a0
[<ffffffff817b8229>] schedule_preempt_disabled+0x39/0x90
[<ffffffff817b5e1a>] __mutex_lock_slowpath+0x13a/0x340
[<ffffffff817b604d>] mutex_lock+0x2d/0x40
[<ffffffff81248585>] do_rmdir+0x165/0x200
[<ffffffff810b189d>] ? task_work_run+0xcd/0xf0
[<ffffffff812497d5>] SyS_unlinkat+0x25/0x40
[<ffffffff817c4e15>] system_call_fastpath+0x1c/0x21
1[14583]: segfault at 8 ip 00007f03dc608958 sp 00007ffcba9b38e0 error 4 in ld-2.17.so[7f03dc5fd000+22000]
traps: 11[21096] general protection ip:7f91c9d58916 sp:7ffd15ac6e90 error:0 in libc-2.17.so[7f91c9d1a000+1c3000]
14[14744]: segfault at 0 ip (null) sp 00007fff9f541928 error 14 in 14[400000+6000]
Lustre: lustre-MDT0002: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 339 previous similar messages
13[5985]: segfault at 0 ip (null) sp 00007ffe5a176678 error 14 in 13[400000+6000]
1[20695]: segfault at 8 ip 00007f5e832a3958 sp 00007fff17fc1740 error 4 in ld-2.17.so[7f5e83298000+22000]
4[25533]: segfault at 8 ip 00007f39444c19a2 sp 00007ffc8bdc4e80 error 4 in ld-2.17.so[7f39444b6000+22000]
8[26317]: segfault at 8 ip 00007f1792aa4958 sp 00007ffd1f54d7a0 error 4 in ld-2.17.so[7f1792a99000+22000]
Lustre: 8624:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568138347/real 1568138347] req@ffff88031ec37b40 x1644310845065536/t0(0) o36->lustre-MDT0001-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568138393 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chmod.0'
Lustre: 8624:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 109 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff880280fae800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 104 previous similar messages
Lustre: lustre-MDT0001: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 104 previous similar messages
8[7293]: segfault at 0 ip (null) sp 00007ffd66d94e38 error 14 in 8[400000+6000]
9[13163]: segfault at 0 ip (null) sp 00007fff417a76d8 error 14 in 9[400000+6000]
LustreError: 26970:0:(mdt_handler.c:628:mdt_pack_acl2body()) lustre-MDT0002: unable to read [0x280000404:0xe51:0x0] ACL: rc = -2
15[22563]: segfault at 8 ip 00007f605aaa1958 sp 00007ffe36354350 error 4 in ld-2.17.so[7f605aa96000+22000]
0[29245]: segfault at 8 ip 00007f45dc550958 sp 00007fff6f0281e0 error 4 in ld-2.17.so[7f45dc545000+22000]
19[10956]: segfault at 0 ip (null) sp 00007ffd3a70b238 error 14 in 19[400000+6000]
7[14910]: segfault at 8 ip 00007f4d6f912958 sp 00007ffff625a340 error 4 in ld-2.17.so[7f4d6f907000+22000]
LustreError: 27119:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8801e13dbb40 x1644310852000512/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:708/0 lens 448/440 e 0 to 0 dl 1568138728 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
7[6209]: segfault at 8 ip 00007f6dca0b0958 sp 00007ffcd1f9a670 error 4 in ld-2.17.so[7f6dca0a5000+22000]
0[6083]: segfault at 8 ip 00007f50eecab958 sp 00007fffaa28fea0 error 4 in ld-2.17.so[7f50eeca0000+22000]
LustreError: 10822:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802c5d4db40 x1644310853682944/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:39/0 lens 448/440 e 0 to 0 dl 1568138814 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 10822:0:(ldlm_lib.c:3399:target_bulk_io()) Skipped 1 previous similar message
LustreError: 28170:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802c0303b40 x1644310854047424/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:57/0 lens 448/440 e 0 to 0 dl 1568138832 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 10635:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff880301ce6b40 x1644310854928128/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:97/0 lens 448/440 e 0 to 0 dl 1568138872 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LNetError: 2558:0:(peer.c:3713:lnet_peer_ni_add_to_recoveryq_locked()) lpni 192.168.123.150@tcp added to recovery queue. Health = 900
LustreError: 28170:0:(ldlm_lib.c:3414:target_bulk_io()) @@@ truncated bulk READ 0(4096) req@ffff8802611d4b40 x1644310854928128/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:98/0 lens 448/440 e 0 to 0 dl 1568138873 ref 1 fl Interpret:/2/0 rc 0/0 job:'ls.0'
Lustre: lustre-MDT0000: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 299 previous similar messages
LustreError: 10822:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802f715c850 x1644310855723776/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:137/0 lens 448/440 e 0 to 0 dl 1568138912 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 2549:0:(events.c:450:server_bulk_callback()) event type 5, status -125, desc ffff880292868c00
7[15585]: segfault at 8 ip 00007f26e5037958 sp 00007ffeb4a34650 error 4 in ld-2.17.so[7f26e502c000+22000]
LustreError: 25745:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802921c4b40 x1644310856508480/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:173/0 lens 448/440 e 0 to 0 dl 1568138948 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 25745:0:(ldlm_lib.c:3399:target_bulk_io()) Skipped 1 previous similar message
LustreError: 10820:0:(ldlm_lib.c:3414:target_bulk_io()) @@@ truncated bulk READ 0(4096) req@ffff8800810cbb40 x1644310856508480/t0(0) o37->47120ec6-bd57-4@192.168.123.150@tcp:173/0 lens 448/440 e 0 to 0 dl 1568138948 ref 1 fl Interpret:/2/0 rc 0/0 job:'ls.0'
LNetError: 2558:0:(peer.c:3713:lnet_peer_ni_add_to_recoveryq_locked()) lpni 192.168.123.150@tcp added to recovery queue. Health = 900
LustreError: 10635:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff880261872b40 x1644310857917824/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:250/0 lens 448/440 e 0 to 0 dl 1568139025 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
0[9124]: segfault at 8 ip 00007fd63b738958 sp 00007ffebd0ba090 error 4 in ld-2.17.so[7fd63b72d000+22000]
Lustre: 9612:0:(client.c:2210:ptlrpc_expire_one_request()) @@@ Request sent has timed out for slow reply: [sent 1568138986/real 1568138986] req@ffff8802810a2b40 x1644310858006336/t0(0) o36->lustre-MDT0001-mdc-ffff880280fae800@192.168.123.150@tcp:12/10 lens 488/512 e 0 to 1 dl 1568138995 ref 2 fl Rpc:XQr/0/ffffffff rc 0/-1 job:'chown.0'
Lustre: 9612:0:(client.c:2210:ptlrpc_expire_one_request()) Skipped 202 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff880280fae800: Connection to lustre-MDT0001 (at 192.168.123.150@tcp) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 193 previous similar messages
Lustre: lustre-MDT0001: Client 0148b1a3-d595-4 (at 192.168.123.150@tcp) reconnecting
Lustre: Skipped 193 previous similar messages
19[26318]: segfault at 0 ip (null) sp 00007ffd466fba28 error 14 in 19[400000+6000]
10[30818]: segfault at 0 ip (null) sp 00007fff5f73ee58 error 14 in 10[400000+6000]
6[6268]: segfault at 8 ip 00007f02f804f958 sp 00007ffe33336270 error 4 in ld-2.17.so[7f02f8044000+22000]
LustreError: 10822:0:(ldlm_lib.c:3399:target_bulk_io()) @@@ Reconnect on bulk READ req@ffff8802f7104850 x1644310860435264/t0(0) o37->0148b1a3-d595-4@192.168.123.150@tcp:383/0 lens 448/440 e 0 to 0 dl 1568139158 ref 1 fl Interpret:/0/0 rc 0/0 job:'ls.0'
LustreError: 10822:0:(ldlm_lib.c:3399:target_bulk_io()) Skipped 1 previous similar message
9[8462]: segfault at 0 ip (null) sp 00007ffdd8f65438 error 14 in 9[400000+6000]
1[14845]: segfault at 0 ip (null) sp 00007ffe0b152c78 error 14 in 1[400000+6000]
17[19401]: segfault at 0 ip (null) sp 00007ffe972d77f8 error 14 in 17[400000+6000]
9[970]: segfault at 0 ip (null) sp 00007ffd0eb0dc08 error 14 in 9 (deleted)[400000+6000]
17[3748]: segfault at 8 ip 00007fda46d52958 sp 00007ffdd16b6030 error 4 in ld-2.17.so[7fda46d47000+22000]
8[12493]: segfault at 0 ip 0000000000403e5f sp 00007fff95b92230 error 6 in 8[400000+6000]
1[18698]: segfault at 8 ip 00007ff1a52e6958 sp 00007ffe63ed1890 error 4 in ld-2.17.so[7ff1a52db000+22000]
Lustre: lustre-MDT0002: Connection restored to 192.168.123.150@tcp (at 192.168.123.150@tcp)
Lustre: Skipped 335 previous similar messages
Externally reported by onyx-68 boilpot email