Match messages in logs (every line must be present in the log output; copy from the "Messages before crash" column below): | |
Match messages in full crash (every line must be present in the crash log output; copy from the "Full Crash" column below): | |
Limit to a test (copy from the "Failing Test" column below): | |
Delete these reports as invalid (e.g. a real bug already under review): | |
Bug or comment: | |
Extra info: | |
Failing Test | Full Crash | Messages before crash | Comment |
---|---|---|---|
racer test 1: racer on clients: centos-0.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 10 PID: 837700 Comm: ll_sa_837291 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffb49ade583e90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000080200017 RDX: 0000000080200018 RSI: ffff9b7a0a623e08 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9b7a3a6ba800 R11: 00000000000076b7 R12: ffff9b7a0a623dc0 R13: ffffffffc1bb4cb0 R14: ffff9b7a0a623a88 R15: ffff9b7a0a623e08 FS: 0000000000000000(0000) GS:ffff9b7bf2480000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000322416000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? do_raw_spin_unlock+0x75/0x190 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) lnet(O) dm_flakey libcfs(O) loop zfs(O) spl(O) ec(O) crc32_generic virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: 508192:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9b7ae0458700 x1840484770512512/t4294967623(0) o101->78c96e07-09f4-4947-9a00-9679625d200b@0@lo:624/0 lens 376/816 e 0 to 0 dl 1755223114 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: lustre-OST0000-osc-ffff9b79f9b43000: disconnect after 20s idle Lustre: lustre-OST0002-osc-ffff9b79f9b43000: disconnect after 24s idle Lustre: Skipped 1 previous similar message Lustre: mdt00_007: service thread pid 508192 was inactive for 42.111 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:mdt00_003 state:I task:mdt00_012 state:I stack:0 pid:508771 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] Lustre: mdt00_000: service thread pid 505943 was inactive for 42.192 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? 
ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] Lustre: Skipped 2 previous similar messages ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] stack:0 pid:507045 ppid:2 flags:0x80004080 ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 task:mdt00_007 state:I stack:0 pid:508192 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 Call Trace: schedule+0xc0/0x180 __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? 
cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a0c347400/0x3980273732a53df0 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x1d:0x0].0x0 bits 0x13/0x0 rrc: 18 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x3980273732a53dd4 expref: 38 pid: 507165 timeout: 3024 lvb_type: 0 Lustre: mdt00_006: service thread pid 508187 completed after 103.941s. 
This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_003: service thread pid 507045 completed after 103.570s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_002: service thread pid 505945 completed after 103.845s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_001: service thread pid 505944 completed after 103.162s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_012: service thread pid 508771 completed after 103.839s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_004: service thread pid 507165 completed after 103.856s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_005: service thread pid 508111 completed after 103.930s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_000: service thread pid 505943 completed after 103.644s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_007: service thread pid 508192 completed after 103.575s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 505929:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755223207 with bad export cookie 4143354775207195638 LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 508657:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 508890:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0x1d:0x0] error: rc = -5 LustreError: 509302:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79f9b43000: inode [0x200000402:0x14:0x0] mdc close failed: rc = -108 Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection restored to (at 0@lo) Lustre: 506220:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 506220:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 506220:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 506220:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 506220:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 506220:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 506222:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 506222:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 506222:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 506222:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 1 previous similar 
message Lustre: 506222:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 506222:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 506222:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 506222:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 506222:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 506222:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 506222:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 506222:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 508551:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 528, rollback = 7 Lustre: 508551:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 3 previous similar messages Lustre: 508551:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 508551:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 508551:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 508551:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 508551:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/528/0, punch: 0/0/0, quota 4/150/0 Lustre: 508551:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 508551:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 508551:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 508551:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 508551:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 3 previous similar messages 7[516090]: segfault at 8 ip 00007fe1076c4875 sp 
00007ffc37a1ddd0 error 4 in ld-2.28.so[7fe1076a3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 508551:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 508551:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 508551:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 508551:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 508551:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 508551:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 1 
previous similar message Lustre: 508551:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 508551:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 508551:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 508551:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 508551:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 508551:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 508187:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x2d0:0x0] with magic=0xbd60bd0 18[519673]: segfault at 56439a221000 ip 000056439a221000 sp 00007ffda1314cf8 error 14 in 18[56439a421000+1000] Code: Unable to access opcode bytes at RIP 0x56439a220fd6. Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 7 previous similar messages Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) 
ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: lustre-OST0003-osc-ffff9b79f9b43000: disconnect after 21s idle LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a466f0400/0x3980273732b22492 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x613:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x3980273732b2244c expref: 193 pid: 505944 timeout: 3170 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 527553:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79f9b43000: inode [0x200000403:0x5de:0x0] mdc close failed: rc = -108 LustreError: 527431:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9b79f9b43000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 527431:0:(mdc_request.c:1477:mdc_read_page()) Skipped 13 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection restored to (at 0@lo) Lustre: 506222:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 506222:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 11 previous similar messages Lustre: 506222:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 506222:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 506222:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 506222:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 506222:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 506222:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 506222:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 506222:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 506222:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 506222:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 11 previous similar messages LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7ad104f200/0x3980273732b6e201 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x242:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 
0x3980273732b6e1de expref: 341 pid: 514724 timeout: 3288 lvb_type: 0 LustreError: 505943:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000000f2a3e05 ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a24e91000/0x3980273732b6e8e5 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x242:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x3980273732b6e8ad expref: 133 pid: 505943 timeout: 0 lvb_type: 0 LustreError: 508221:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755223469 with bad export cookie 4143354775207195834 LustreError: lustre-MDT0000-mdc-ffff9b79e024b000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79e024b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 2 previous similar messages LustreError: lustre-MDT0000-mdc-ffff9b79e024b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 534653:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79e024b000: inode [0x200000402:0x821:0x0] mdc close failed: rc = -108 LustreError: 534653:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 534579:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9b79e024b000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 534579:0:(mdc_request.c:1477:mdc_read_page()) Skipped 24 previous similar messages LustreError: 534247:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x242:0x0] error -108. 
LustreError: 534713:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 534713:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff9b79e024b000: Connection restored to (at 0@lo) Lustre: 506220:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 506220:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 9 previous similar messages Lustre: 506220:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 506220:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 506220:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 506220:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 506220:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 506220:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 506220:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 506220:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 506220:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 506220:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 512211:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x2bf:0x0] with magic=0xbd60bd0 Lustre: 512211:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 505943:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x7b0:0x0] with magic=0xbd60bd0 Lustre: 505943:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous 
similar message 10[553943]: segfault at 8 ip 00007f1b739c6875 sp 00007ffe678892e0 error 4 in ld-2.28.so[7f1b739a5000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[554261]: segfault at 8 ip 00007f0be2b14875 sp 00007ffc65b1e460 error 4 in ld-2.28.so[7f0be2af3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 9[554834]: segfault at 8 ip 00007f2cb302a875 sp 00007fffdede8090 error 4 in ld-2.28.so[7f2cb3009000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[559360]: segfault at 8 ip 00007f1710e2f875 sp 00007ffd259599a0 error 4 in ld-2.28.so[7f1710e0e000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 99 previous similar messages Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 
509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 514724:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0xb84:0x0] with magic=0xbd60bd0 Lustre: 514724:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 510905:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0xabe:0x0] with magic=0xbd60bd0 Lustre: 510905:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 4[569557]: segfault at 8 ip 00007f26e52b6875 sp 00007ffc7da02870 error 4 in ld-2.28.so[7f26e5295000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[572513]: segfault at 8 ip 00007fb45e465875 sp 00007ffeaa19b4e0 error 4 in ld-2.28.so[7fb45e444000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 9[576398]: segfault at 0 ip 000056138f6f3b47 sp 00007fffbf6a6f00 error 6 in 12[56138f6ef000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 512211:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: 
EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x10a3:0x0] with magic=0xbd60bd0 Lustre: 512211:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 17[578642]: segfault at 0 ip 0000562838c50b47 sp 00007fff866b0d00 error 6 in 17[562838c4c000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 traps: 2[581601] trap invalid opcode ip:55b3816a481a sp:7ffd07957a98 error:0 in 2[55b38169f000+7000] 15[587384]: segfault at 8 ip 00007fa5d2fb3875 sp 00007ffcd0e36bc0 error 4 in ld-2.28.so[7fa5d2f92000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[591578]: segfault at 0 ip 000056320b119b47 sp 00007ffe2ebc3940 error 6 in 12[56320b115000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12[596152]: segfault at 8 ip 00007f4eeb1aa875 sp 00007ffe02a90eb0 error 4 in ld-2.28.so[7f4eeb189000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 6[596148]: segfault at 8 ip 00007fec9b68c875 sp 00007ffc1b2b9da0 error 4 in ld-2.28.so[7fec9b66b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 6[598672]: segfault at 8 ip 00007ff53de6a875 sp 00007ffc454220c0 error 4 in ld-2.28.so[7ff53de49000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 
00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 590375:0:(statahead.c:1600:ll_statahead_thread()) lustre: ll_sa_589362 LIST => FNAME no wakeup. 1[599930]: segfault at 0 ip 00005610cf066b47 sp 00007fff59041e90 error 6 in 1[5610cf062000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 557241:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 557241:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 109 previous similar messages Lustre: 557241:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 557241:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 557241:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 557241:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 557241:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 557241:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 557241:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 557241:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 557241:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 557241:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 510908:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x191d:0x0] with magic=0xbd60bd0 Lustre: 510908:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 18[619303]: 
segfault at 8 ip 00007f7b72eab875 sp 00007fff1fd170b0 error 4 in ld-2.28.so[7f7b72e8a000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 9[632230]: segfault at 8 ip 00007f9188ab5875 sp 00007ffd25104450 error 4 in ld-2.28.so[7f9188a94000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 6[634678]: segfault at 8 ip 00007f8b3d9cb875 sp 00007ffd23212ef0 error 4 in ld-2.28.so[7f8b3d9aa000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 508187:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x24a5:0x0] with magic=0xbd60bd0 Lustre: 508187:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 19[649566]: segfault at 8 ip 00007fc047d27875 sp 00007ffdd3286970 error 4 in ld-2.28.so[7fc047d06000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 15[650449]: segfault at 8 ip 00007fe5b067c875 sp 00007ffd4eb609a0 error 4 in ld-2.28.so[7fe5b065b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 19[652626]: segfault at 8 ip 00007ff9de192875 sp 00007ffe895c0a80 error 4 in ld-2.28.so[7ff9de171000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 
00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 traps: 11[655063] general protection fault ip:55a7a0482fb6 sp:7ffdf94f9818 error:0 in 11[55a7a047e000+7000] LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a71e14000/0x39802737330ebdf6 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x2ae5:0x0].0x0 bits 0x13/0x0 rrc: 17 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x39802737330ebde1 expref: 1297 pid: 508192 timeout: 3758 lvb_type: 0 LustreError: 507045:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000003c0ff4a6 ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a5cb5b400/0x39802737330ebec1 lrc: 4/0,0 mode: PR/PR res: [0x200000405:0x2ae5:0x0].0x0 bits 0x13/0x0 rrc: 16 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0x39802737330ebe74 expref: 215 pid: 507045 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 508221:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755223940 with bad export cookie 4143354775208081334 LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 662457:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79f9b43000: inode [0x200000404:0x2d1c:0x0] mdc close failed: rc = -108 LustreError: 662457:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages LustreError: 662367:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9b79f9b43000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 662367:0:(mdc_request.c:1477:mdc_read_page()) Skipped 4 previous similar messages LustreError: 662251:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 662457:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9b79f9b43000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff9b7a25250200) refcount nonzero (15) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection restored to (at 0@lo) Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 509945:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 117 previous similar messages Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 509945:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 117 previous similar messages Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 509945:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 117 previous similar messages Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 509945:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 117 previous similar messages Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 509945:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 117 previous similar messages Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 
0/0/0, ref_del: 0/0/0 Lustre: 509945:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 117 previous similar messages traps: 7[664516] trap invalid opcode ip:55dd8acbf92a sp:7fff734daff8 error:0 in 7[55dd8acba000+7000] 9[664399]: segfault at 0 ip 0000556f13320b47 sp 00007ffca9809d40 error 6 in 9[556f1331c000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 18[671183]: segfault at 8 ip 00007fb07c806875 sp 00007ffcd1b923b0 error 4 in ld-2.28.so[7fb07c7e5000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7ae11c6c00/0x3980273733163654 lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x36d:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x39802737331635eb expref: 179 pid: 510905 timeout: 3892 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 672757:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000406:0x36d:0x0] error -5. 
LustreError: 672733:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000406:0x36d:0x0] error: rc = -5 LustreError: 672924:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79f9b43000: inode [0x200000405:0x2d5b:0x0] mdc close failed: rc = -108 LustreError: 672924:0:(file.c:248:ll_close_inode_openhandle()) Skipped 7 previous similar messages LustreError: 672733:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 16 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection restored to (at 0@lo) 11[675847]: segfault at 8 ip 00007f4348e4f875 sp 00007ffe990897e0 error 4 in ld-2.28.so[7f4348e2e000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 675847:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79f9b43000: inode [0x200000405:0x2f69:0x0] mdc close failed: rc = -13 LustreError: 675847:0:(file.c:248:ll_close_inode_openhandle()) Skipped 14 previous similar messages Lustre: 507045:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x2f43:0x0] with magic=0xbd60bd0 Lustre: 507045:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages LustreError: 679226:0:(statahead.c:2457:start_statahead_thread()) lustre: unsupported statahead pattern 0X0. 
12[684320]: segfault at 8 ip 00007f29b0be1875 sp 00007fffc0ed8880 error 4 in ld-2.28.so[7f29b0bc0000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 0[690132]: segfault at 8 ip 00007f0fb21d4875 sp 00007ffe547d9b40 error 4 in ld-2.28.so[7f0fb21b3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 0[695800]: segfault at 8 ip 00007f76e9708875 sp 00007ffde6a86270 error 4 in ld-2.28.so[7f76e96e7000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: lustre-OST0001-osc-ffff9b79f9b43000: disconnect after 23s idle Lustre: lustre-OST0000-osc-ffff9b79e024b000: disconnect after 22s idle LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a668b6000/0x398027373328cff1 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x3723:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x398027373328cfb2 expref: 1000 pid: 509266 timeout: 4096 lvb_type: 0 LustreError: 510736:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755224276 with bad export cookie 4143354775208392435 LustreError: lustre-MDT0000-mdc-ffff9b79e024b000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79e024b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9b79e024b000: This client was evicted by lustre-MDT0000; in progress 
operations using this service will fail. LustreError: 699430:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000405:0x3723:0x0] error -108. LustreError: 699548:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79e024b000: inode [0x200000405:0x3723:0x0] mdc close failed: rc = -108 LustreError: 699430:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 699548:0:(file.c:248:ll_close_inode_openhandle()) Skipped 3 previous similar messages LustreError: 699726:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 699723:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9b79e024b000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff9b7a45dfc300) refcount nonzero (1) after lock cleanup; forcing cleanup. LustreError: 699723:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff9b79e024b000: Connection restored to (at 0@lo) 18[707695]: segfault at 555f439e5000 ip 0000555f439e5000 sp 00007ffd8521c628 error 14 in 18[555f43be5000+1000] Code: Unable to access opcode bytes at RIP 0x555f439e4fd6. 
8[710332]: segfault at 0 ip 000055aa40db9b47 sp 00007ffdb50c9e00 error 6 in 8[55aa40db5000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 16[713755]: segfault at 8 ip 00007f0d22fc9875 sp 00007ffeb12f0ac0 error 4 in ld-2.28.so[7f0d22fa8000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 505943:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000408:0x541:0x0] with magic=0xbd60bd0 Lustre: 505943:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages 13[726142]: segfault at 8 ip 00007f142cf5a875 sp 00007fff5f705e00 error 4 in ld-2.28.so[7f142cf39000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 8[726876]: segfault at 8 ip 00007f1d772d4875 sp 00007fff6c3b4450 error 4 in ld-2.28.so[7f1d772b3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 ptlrpc_watchdog_fire: 6 callbacks suppressed Lustre: mdt_out00_001: service thread pid 505949 was inactive for 40.959 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:mdt_out00_001 state:I stack:0 pid:505949 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? 
do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_pdo_lock+0x409/0x910 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lu_name_is_valid_len+0x5e/0x80 [mdt] mdt_getattr_name_lock+0x278a/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] mdt_batch+0x7ee/0x20a9 [mdt] ? lustre_msg_get_tag+0x20/0x110 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 505933:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a24f68c00/0x3980273733410f62 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x13e5:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x3980273733410def expref: 756 pid: 508209 timeout: 4310 lvb_type: 0 LustreError: 509266:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000008938836e ns: mdt-lustre-MDT0000_UUID lock: ffff9b7a82950400/0x3980273733411535 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0xb30:0x0].0x0 bits 0x13/0x0 rrc: 23 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x3980273733411527 expref: 420 pid: 509266 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 507036:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755224493 with bad export cookie 4143354775214641181 LustreError: Skipped 3 previous similar messages Lustre: mdt_out00_001: service thread pid 505949 completed after 73.759s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff9b79f9b43000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 734448:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -108 LustreError: 734448:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 9 previous similar messages LustreError: 733710:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9b79f9b43000: [0x200000408:0xb30:0x0] lock enqueue fails: rc = -108 LustreError: 734474:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79f9b43000: inode [0x200000408:0xb58:0x0] mdc close failed: rc = -108 LustreError: 733710:0:(mdc_request.c:1477:mdc_read_page()) Skipped 24 previous similar messages LustreError: 734158:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000407:0x13e5:0x0] error -108. LustreError: 734474:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9b79f9b43000: Connection restored to (at 0@lo) Lustre: 506222:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 564, rollback = 7 Lustre: 506222:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 185 previous similar messages Lustre: 506222:0:(osd_handler.c:1962:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 506222:0:(osd_handler.c:1962:osd_trans_dump_creds()) Skipped 185 previous similar messages Lustre: 506222:0:(osd_handler.c:1969:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 506222:0:(osd_handler.c:1969:osd_trans_dump_creds()) Skipped 185 previous similar messages Lustre: 506222:0:(osd_handler.c:1979:osd_trans_dump_creds()) write: 2/564/0, punch: 0/0/0, quota 4/150/0 Lustre: 506222:0:(osd_handler.c:1979:osd_trans_dump_creds()) Skipped 185 previous similar messages Lustre: 506222:0:(osd_handler.c:1986:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 506222:0:(osd_handler.c:1986:osd_trans_dump_creds()) Skipped 185 previous similar messages Lustre: 506222:0:(osd_handler.c:1993:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 
0/0/0 Lustre: 506222:0:(osd_handler.c:1993:osd_trans_dump_creds()) Skipped 185 previous similar messages 3[745521]: segfault at 8 ip 00007f29714c7875 sp 00007ffe8731ffc0 error 4 in ld-2.28.so[7f29714a6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 2[747529]: segfault at 8 ip 00007f91535e5875 sp 00007ffeab231c80 error 4 in ld-2.28.so[7f91535c4000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 19[751429]: segfault at 8 ip 00007fc3347a9875 sp 00007fff1403e470 error 4 in ld-2.28.so[7fc334788000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[758144]: segfault at 8 ip 00007f3a0fb97875 sp 00007ffda8c49310 error 4 in ld-2.28.so[7f3a0fb76000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 758946:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9b79e024b000: inode [0x200000409:0x81b:0x0] mdc close failed: rc = -13 LustreError: 758946:0:(file.c:248:ll_close_inode_openhandle()) Skipped 8 previous similar messages 12[766152]: segfault at 8 ip 00007f799cdbd875 sp 00007fffead9d0a0 error 4 in ld-2.28.so[7f799cd9c000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 0[768689]: segfault at 8 ip 00007fa18f767875 sp 00007fff9bfd3880 error 4 in 
ld-2.28.so[7fa18f746000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 510905:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000408:0x2136:0x0] with magic=0xbd60bd0 Lustre: 510905:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 7 previous similar messages 5[798767]: segfault at 8 ip 00007f5713d12875 sp 00007ffffbe33cb0 error 4 in ld-2.28.so[7f5713cf1000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 17[825845]: segfault at 0 ip 0000564b65066da4 sp 00007fff23e11e80 error 4 in 17[564b65065000+7000] Code: 89 44 24 48 48 89 44 24 58 48 89 e0 66 2e 0f 1f 84 00 00 00 00 00 48 83 c0 10 48 8b 38 48 85 ff 74 14 b9 06 00 00 00 48 89 ee <f3> a6 0f 97 c2 80 da 00 84 d2 75 e0 4c 8b 60 08 ba 05 00 00 00 48 traps: 3[829119] general protection fault ip:55c431a8cfdc sp:7ffe81397f78 error:0 in 3[55c431a88000+7000] 2[830756]: segfault at 8 ip 00007fe446b86875 sp 00007ffdd15ba1c0 error 4 in ld-2.28.so[7fe446b65000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 | Link to test |
racer test 1: racer on clients: oleg207-client.virtnet DURATION=3600 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 3 PID: 266479 Comm: ll_sa_266392 Kdump: loaded Tainted: G O -------- - - 4.18.0rh8.10-debug #2 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-1.fc38 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 31 b5 87 ff 48 83 05 e9 67 ce 02 01 39 05 e7 fe 74 01 77 cf 48 83 05 e9 67 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 f4 67 ce 02 01 83 f8 01 74 2b 48 83 05 f7 67 ce 02 RSP: 0018:ffffbbd908f3fe90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006 RDX: ffff91f3c21b8160 RSI: ffff91f28b2dd6c8 RDI: 0000000000000008 RBP: 0000000000000008 R08: 000000000000000f R09: 0000000000000005 R10: ffff91f3b2869400 R11: ffffffff9bed5de8 R12: ffff91f28b2dd680 R13: ffff91f3b28694b8 R14: ffff91f28b2dd348 R15: ffff91f28b2dd6c8 FS: 0000000000000000(0000) GS:ffff91f3c2180000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000022216005 CR4: 0000000000170ee0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? thread_group_exited+0x90/0x90 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) lnet(O) libcfs(O) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver intel_rapl_msr intel_rapl_common sb_edac rapl i2c_piix4 pcspkr squashfs crct10dif_pclmul crc32_pclmul ata_generic crc32c_intel ata_piix serio_raw ghash_clmulni_intel libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | 16[13715]: segfault at 8 ip 00007f6e90b4c875 sp 00007ffe2154dc90 error 4 in ld-2.28.so[7f6e90b2b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 13780:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on [0x240000403:0x66:0x0] : rc = -5 LustreError: 13780:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000403:0x66:0x0] error -5. 6[14163]: segfault at 0 ip 0000562481881b47 sp 00007ffdce1ee150 error 6 in 6[56248187d000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: dir [0x240000403:0x48:0x0] stripe 1 readdir failed: -2, directory is partially accessed! 
LustreError: 11976:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a099a000: dir page locate: [0x200000403:0x1c:0x0] at 0: rc -5 15[15252]: segfault at 8 ip 00007f3ab4dcd875 sp 00007fff3dd9e2c0 error 4 in ld-2.28.so[7f3ab4dac000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 11009:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x13d:0x0]: rc = -5 LustreError: 11009:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 10962:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x253:0x0]: rc = -5 LustreError: 10962:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 10962:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x253:0x0]: rc = -5 LustreError: 10962:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 10962:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x253:0x0]: rc = -5 LustreError: 10962:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message LustreError: 10962:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 10962:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 1 previous similar message Lustre: dir [0x200000402:0x4de:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 5 previous similar messages Lustre: dir [0x240000402:0x478:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
LustreError: 25995:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a0065000: dir page locate: [0x200000400:0x18:0x0] at 0: rc -5 LustreError: 25995:0:(mdc_request.c:1492:mdc_read_page()) Skipped 4 previous similar messages 3[26386]: segfault at 8 ip 00007efd0517e875 sp 00007ffc82330050 error 4 in ld-2.28.so[7efd0515d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 26785:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff91f3a0065000: dir page locate: [0x240000401:0x15:0x0] at 0: rc -5 LustreError: 26785:0:(mdc_request.c:1492:mdc_read_page()) Skipped 1 previous similar message Lustre: dir [0x200000402:0x644:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 14 previous similar messages LustreError: 27649:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff91f3a0065000: dir page locate: [0x240000401:0x15:0x0] at 0: rc -5 LustreError: 27649:0:(mdc_request.c:1492:mdc_read_page()) Skipped 11 previous similar messages LustreError: 27963:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000402:0x3e6:0x0]: rc = -5 LustreError: 27963:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 3 previous similar messages LustreError: 27963:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 27963:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 3 previous similar messages LustreError: 28191:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a0065000: inode [0x240000402:0x3ee:0x0] mdc close failed: rc = -2 LustreError: 416:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 14 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 28173:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on 
[0x240000402:0x3e6:0x0] : rc = -5 LustreError: 28173:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000402:0x3e6:0x0] error -5. Lustre: dir [0x200000402:0x644:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 48 previous similar messages LustreError: 1387:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 29924:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on [0x200000402:0x722:0x0] : rc = -5 LustreError: 29924:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000402:0x722:0x0] error -5. LustreError: 31090:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on [0x200000402:0x722:0x0] : rc = -5 LustreError: 31589:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000402:0x722:0x0]: rc = -5 LustreError: 31589:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 9 previous similar messages LustreError: 31589:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 31589:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 9 previous similar messages LustreError: 31493:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff91f3a0065000: dir page locate: [0x240000402:0x33b:0x0] at 0: rc -5 LustreError: 31493:0:(mdc_request.c:1492:mdc_read_page()) Skipped 28 previous similar messages LustreError: 31845:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000402:0x722:0x0] error -5. 
LustreError: 32724:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on [0x200000402:0x722:0x0] : rc = -5 LustreError: 32724:0:(lov_object.c:1350:lov_layout_change()) Skipped 8 previous similar messages LustreError: 416:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 14 [0x240000402:0x3e6:0x0] inode@0000000000000000: rc = -5 LustreError: lustre-MDT0000-mdc-ffff91f3a099a000: operation ldlm_enqueue to node 192.168.202.107@tcp failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff91f3a099a000: Connection to lustre-MDT0000 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 2 previous similar messages LustreError: lustre-MDT0000-mdc-ffff91f3a099a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 17864:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a099a000: inode [0x200000403:0x119:0x0] mdc close failed: rc = -5 LustreError: 20856:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x119:0x0] error: rc = -5 LustreError: 17688:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 20856:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 3 previous similar messages Lustre: lustre-MDT0000-mdc-ffff91f3a099a000: Connection restored to (at 192.168.202.107@tcp) Lustre: dir [0x240000403:0x314:0x0] stripe 1 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 12 previous similar messages LustreError: lustre-MDT0001-mdc-ffff91f3a0065000: operation ldlm_enqueue to node 192.168.202.107@tcp failed: rc = -107 Lustre: lustre-MDT0001-mdc-ffff91f3a0065000: Connection to lustre-MDT0001 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 6 previous similar messages LustreError: lustre-MDT0001-mdc-ffff91f3a0065000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. LustreError: 33071:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000402:0x7d6:0x0] error: rc = -5 LustreError: 33071:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 40 previous similar messages LustreError: 32990:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 32990:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 5 previous similar messages LustreError: 23648:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a0065000: inode [0x240000402:0x31e:0x0] mdc close failed: rc = -5 LustreError: 23648:0:(file.c:248:ll_close_inode_openhandle()) Skipped 19 previous similar messages LustreError: 23640:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000402:0x31e:0x0] error -108. 
LustreError: 23640:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 23848:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0001-mdc-ffff91f3a0065000: [0x240000402:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 23848:0:(mdc_request.c:1477:mdc_read_page()) Skipped 1 previous similar message Lustre: lustre-MDT0001-mdc-ffff91f3a0065000: Connection restored to (at 192.168.202.107@tcp) LustreError: 38790:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a099a000: inode [0x240000403:0x483:0x0] mdc close failed: rc = -2 LustreError: 38790:0:(file.c:248:ll_close_inode_openhandle()) Skipped 10 previous similar messages LustreError: 41359:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x497:0x0]: rc = -5 LustreError: 41359:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 8 previous similar messages LustreError: 41359:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 41359:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 8 previous similar messages Lustre: dir [0x200000404:0x678:0x0] stripe 2 readdir failed: -2, directory is partially accessed! LustreError: 42137:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a099a000: dir page locate: [0x200000404:0x27c:0x0] at 0: rc -5 Lustre: Skipped 2 previous similar messages LustreError: lustre-MDT0000-mdc-ffff91f3a0065000: operation ldlm_enqueue to node 192.168.202.107@tcp failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff91f3a0065000: Connection to lustre-MDT0000 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 3 previous similar messages LustreError: lustre-MDT0000-mdc-ffff91f3a0065000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 33225:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0xb68:0x0] error: rc = -5 LustreError: 11428:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 33225:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 9 previous similar messages LustreError: 11428:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 1 previous similar message LustreError: 17411:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a0065000: inode [0x200000402:0xcc:0x0] mdc close failed: rc = -108 LustreError: 32761:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000402:0xb68:0x0] error -108. LustreError: 32761:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 47145:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff91f3a0065000: namespace resource [0x200000007:0x1:0x0].0x0 (ffff91f3866e1e00) refcount nonzero (1) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff91f3a0065000: Connection restored to (at 192.168.202.107@tcp) LustreError: lustre-MDT0001-mdc-ffff91f3a099a000: operation ldlm_enqueue to node 192.168.202.107@tcp failed: rc = -107 LustreError: Skipped 5 previous similar messages Lustre: lustre-MDT0001-mdc-ffff91f3a099a000: Connection to lustre-MDT0001 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0001-mdc-ffff91f3a099a000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. LustreError: 46882:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000403:0x9b7:0x0] error -5. 
LustreError: 48551:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a099a000: inode [0x240000403:0x9a5:0x0] mdc close failed: rc = -108 LustreError: 48551:0:(file.c:248:ll_close_inode_openhandle()) Skipped 10 previous similar messages Lustre: lustre-MDT0001-mdc-ffff91f3a099a000: Connection restored to (at 192.168.202.107@tcp) 9[51953]: segfault at 8 ip 00007f2d6461e875 sp 00007fffe7d1b930 error 4 in ld-2.28.so[7f2d645fd000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 52517:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0x101:0x0]: rc = -5 LustreError: 52517:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 54438:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a099a000: cannot apply new layout on [0x240000404:0xf3:0x0] : rc = -5 LustreError: 54438:0:(lov_object.c:1350:lov_layout_change()) Skipped 2 previous similar messages 1[55232]: segfault at 55acf1dc1000 ip 000055acf1dc1000 sp 00007ffebc704d20 error 14 in 1[55acf1fc1000+1000] Code: Unable to access opcode bytes at RIP 0x55acf1dc0fd6. Lustre: dir [0x200000405:0x326:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 1 previous similar message 15[64644]: segfault at 8 ip 00007fda0b441875 sp 00007fff18ed7cf0 error 4 in ld-2.28.so[7fda0b420000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 62702:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a0065000: dir page locate: [0x200000400:0x3f:0x0] at 0: rc -5 LustreError: 62702:0:(mdc_request.c:1492:mdc_read_page()) Skipped 2 previous similar messages LustreError: 416:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 13 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 72852:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a099a000: cannot apply new layout on [0x240000404:0x574:0x0] : rc = -5 LustreError: 72852:0:(lov_object.c:1350:lov_layout_change()) Skipped 4 previous similar messages LustreError: 72852:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000404:0x574:0x0] error -5. 
LustreError: 72852:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message 15[75716]: segfault at 0 ip 000055d30af53b47 sp 00007fffe3e23d90 error 6 in 15[55d30af4f000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 76064:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a0065000: inode [0x200000404:0x125a:0x0] mdc close failed: rc = -2 LustreError: 76064:0:(file.c:248:ll_close_inode_openhandle()) Skipped 11 previous similar messages LustreError: 64:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 15 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 64:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 2 previous similar messages LustreError: 79902:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x16bf:0x0]: rc = -5 LustreError: 79902:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 142 previous similar messages LustreError: 79902:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 79902:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 142 previous similar messages LustreError: 79828:0:(llite_lib.c:1888:ll_update_lsm_md()) lustre: [0x240000405:0xa59:0x0] dir layout mismatch: LustreError: 79828:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 79828:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x240000400:0x5f:0x0] LustreError: 79828:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=1 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 
81267:0:(llite_lib.c:1888:ll_update_lsm_md()) lustre: [0x200000404:0x17e3:0x0] dir layout mismatch: LustreError: 81267:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 81267:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x200000400:0x6e:0x0] LustreError: 81267:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 4 previous similar messages LustreError: 81267:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 81269:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 81269:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 81270:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 81270:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 81277:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 81277:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 
max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 81266:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=5 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 81266:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= traps: 15[85442] general protection fault ip:5572efeef697 sp:7ffd19630fb8 error:0 in 15[5572efee9000+7000] LustreError: 84438:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on [0x200000404:0x18c8:0x0] : rc = -5 LustreError: 84438:0:(lov_object.c:1350:lov_layout_change()) Skipped 7 previous similar messages Lustre: dir [0x240000404:0xaf1:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 8 previous similar messages 0[90757]: segfault at 0 ip 0000564744b81b47 sp 00007fffd7698120 error 6 in 0[564744b7d000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: lustre-MDT0001-mdc-ffff91f3a0065000: operation ldlm_enqueue to node 192.168.202.107@tcp failed: rc = -107 Lustre: lustre-MDT0001-mdc-ffff91f3a0065000: Connection to lustre-MDT0001 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 3 previous similar messages LustreError: lustre-MDT0001-mdc-ffff91f3a0065000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. LustreError: lustre-MDT0001-mdc-ffff91f3a099a000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. 
LustreError: 91703:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a0065000: inode [0x240000405:0xd36:0x0] mdc close failed: rc = -108 LustreError: 90129:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000405:0xd6a:0x0] error: rc = -5 LustreError: 90129:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 748 previous similar messages LustreError: 91703:0:(file.c:248:ll_close_inode_openhandle()) Skipped 7 previous similar messages LustreError: 93549:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0001-mdc-ffff91f3a0065000: namespace resource [0x240000404:0xce7:0x0].0x0 (ffff91f388f13b00) refcount nonzero (2) after lock cleanup; forcing cleanup. LustreError: 93549:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message LustreError: 93507:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0001-mdc-ffff91f3a0065000: [0x240000401:0x56:0x0] lock enqueue fails: rc = -108 LustreError: 93507:0:(mdc_request.c:1477:mdc_read_page()) Skipped 27 previous similar messages Lustre: lustre-MDT0001-mdc-ffff91f3a0065000: Connection restored to (at 192.168.202.107@tcp) LustreError: 93028:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0xa52:0x0]: rc = -5 LustreError: 93028:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 38 previous similar messages LustreError: 93028:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 93028:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 38 previous similar messages LustreError: 95591:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a099a000: cannot apply new layout on [0x240000404:0xa52:0x0] : rc = -5 LustreError: 95591:0:(lov_object.c:1350:lov_layout_change()) Skipped 5 previous similar messages LustreError: 95591:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000404:0xa52:0x0] error -5. 
LustreError: 95591:0:(vvp_io.c:1909:vvp_io_init()) Skipped 5 previous similar messages 15[103818]: segfault at 8 ip 00007f2131f9c875 sp 00007ffd41403d90 error 4 in ld-2.28.so[7f2131f7b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 15[105039]: segfault at 8 ip 00007fe319a3b875 sp 00007ffe181365a0 error 4 in ld-2.28.so[7fe319a1a000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 15[105484]: segfault at 8 ip 00007fbc4a32c875 sp 00007fff180c2c40 error 4 in ld-2.28.so[7fbc4a30b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 105656:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 9 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 105656:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 3 previous similar messages Lustre: dir [0x200000404:0x20a0:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
LustreError: 104550:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a099a000: dir page locate: [0x200000405:0xf2c:0x0] at 0: rc -5 Lustre: Skipped 3 previous similar messages LustreError: 104550:0:(mdc_request.c:1492:mdc_read_page()) Skipped 4 previous similar messages 5[112067]: segfault at 0 ip 0000561245a65b47 sp 00007ffc0d12da70 error 6 in 5[561245a61000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12[112394]: segfault at 8 ip 00007ff4294e0875 sp 00007ffc60fa04c0 error 4 in ld-2.28.so[7ff4294bf000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[112649]: segfault at 2 ip 00005653a166bf24 sp 00007ffdf2e96e88 error 6 in 5[5653a1666000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff <88> 07 00 00 54 dd ff ff b4 07 00 00 74 dd ff ff c8 07 00 00 84 dd LustreError: 117847:0:(llite_lib.c:1888:ll_update_lsm_md()) lustre: [0x200000405:0x1900:0x0] dir layout mismatch: LustreError: 117847:0:(llite_lib.c:1888:ll_update_lsm_md()) Skipped 4 previous similar messages LustreError: 117847:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 117847:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x200000400:0x96:0x0] LustreError: 117847:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 24 previous similar messages LustreError: 117847:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 
version=1 migrate_offset=2 migrate_hash=crush:2000003 pool= 3[119265]: segfault at 55def318ce9b ip 000055def318ecf5 sp 00007fff3620e7e8 error 7 in 18[55def3188000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 42 0e <08> 4d 0b 1c 00 00 00 34 0c 00 00 e0 e0 ff ff 48 00 00 00 00 45 0e 3[119608]: segfault at 55667845be9b ip 000055667845dcf5 sp 00007fff211c6968 error 7 in 18[556678457000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 42 0e <08> 4d 0b 1c 00 00 00 34 0c 00 00 e0 e0 ff ff 48 00 00 00 00 45 0e LustreError: 120706:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a099a000: inode [0x240000406:0x4c7:0x0] mdc close failed: rc = -2 LustreError: 120706:0:(file.c:248:ll_close_inode_openhandle()) Skipped 25 previous similar messages 6[128015]: segfault at 0 ip 0000563b8e50d620 sp 00007ffead529358 error 6 in 6[563b8e50c000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 119854:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 9 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 119854:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message 10[128854]: segfault at 8 ip 00007fb6befe7875 sp 00007ffe182d47f0 error 4 in ld-2.28.so[7fb6befc6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 10[128883]: segfault at 8 ip 00007fb3a6c72875 sp 00007ffc54198500 error 4 in ld-2.28.so[7fb3a6c51000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 
49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[130163]: segfault at 8 ip 00007f3f0e131875 sp 00007ffee5bc5900 error 4 in ld-2.28.so[7f3f0e110000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[133186]: segfault at 0 ip 0000560ae54cd32c sp 00007ffe80dc8c68 error 6 in 12[560ae54c7000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 4a 0e 80 02 03 <ff> 03 0e 88 02 4a 0e 90 02 4a 0e 98 02 42 0e a0 02 5a 0e 80 02 64 5[138753]: segfault at 8 ip 00007f0f755c9875 sp 00007fffb0eeb470 error 4 in ld-2.28.so[7f0f755a8000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: lustre-MDT0000-mdc-ffff91f3a099a000: operation ldlm_enqueue to node 192.168.202.107@tcp failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff91f3a099a000: Connection to lustre-MDT0000 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 2 previous similar messages Lustre: Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff91f3a099a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 96506:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x1252:0x0] error: rc = -108 LustreError: 96340:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a099a000: [0x200000402:0x2:0x0] lock enqueue fails: rc = -108 LustreError: 96506:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 9 previous similar messages LustreError: 133559:0:(lmv_obd.c:1468:lmv_statfs()) lustre-MDT0000-mdc-ffff91f3a099a000: can't stat MDS #0: rc = -108 LustreError: 141071:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff91f3a099a000: namespace resource [0x200000007:0x1:0x0].0x0 (ffff91f3b6fa7c00) refcount nonzero (1) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff91f3a099a000: Connection restored to (at 192.168.202.107@tcp) Lustre: Skipped 1 previous similar message LustreError: lustre-MDT0001-mdc-ffff91f3a0065000: operation mds_getattr to node 192.168.202.107@tcp failed: rc = -107 Lustre: lustre-MDT0001-mdc-ffff91f3a0065000: Connection to lustre-MDT0001 (at 192.168.202.107@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0001-mdc-ffff91f3a0065000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. 
LustreError: 132143:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x1d32:0x0] error: rc = -5 LustreError: 132143:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 1018 previous similar messages LustreError: 200164:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 200164:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 3 previous similar messages LustreError: 134528:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff91f3a0065000: dir page locate: [0x240000400:0x8f:0x0] at 0: rc -5 LustreError: 134528:0:(mdc_request.c:1492:mdc_read_page()) Skipped 12 previous similar messages Lustre: lustre-MDT0001-mdc-ffff91f3a0065000: Connection restored to (at 192.168.202.107@tcp) 0[207825]: segfault at 55951d6788c7 ip 0000559519d98434 sp 00007ffca1a57208 error 6 in 0[559519d92000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 05 8d 04 8e 03 8f 02 10 00 00 00 28 00 00 00 11 b4 ff ff 05 00 19[209272]: segfault at 8 ip 00007f472af65875 sp 00007fff061dea90 error 4 in ld-2.28.so[7f472af44000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 219195:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff91f3a0065000: inode [0x200000405:0x236d:0x0] mdc close failed: rc = -2 LustreError: 219195:0:(file.c:248:ll_close_inode_openhandle()) Skipped 55 previous similar messages LustreError: 220589:0:(llite_lib.c:1888:ll_update_lsm_md()) lustre: [0x240000408:0x6d0:0x0] dir layout mismatch: LustreError: 220589:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 
220589:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x240000400:0xa8:0x0] LustreError: 220589:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 5 previous similar messages LustreError: 220589:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=1 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=crush:2000003 pool= 4[224177]: segfault at 55f4a6b6c000 ip 000055f4a6b6c000 sp 00007fffc0624768 error 14 in 4[55f4a6d6c000+1000] Code: Unable to access opcode bytes at RIP 0x55f4a6b6bfd6. LustreError: 91224:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep sleep [0x200000406:0x935:0x0] inode@0000000000000000: rc = -5 LustreError: 224848:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff91f3a0065000: cannot apply new layout on [0x240000406:0x19a0:0x0] : rc = -5 LustreError: 224848:0:(lov_object.c:1350:lov_layout_change()) Skipped 33 previous similar messages LustreError: 224848:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000406:0x19a0:0x0] error -5. 
LustreError: 224848:0:(vvp_io.c:1909:vvp_io_init()) Skipped 10 previous similar messages LustreError: 226954:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000406:0x8ce:0x0]: rc = -5 LustreError: 226954:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 214 previous similar messages LustreError: 226954:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 226954:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 214 previous similar messages 11[236039]: segfault at 8 ip 00007f7e923f4875 sp 00007ffd0da02100 error 4 in ld-2.28.so[7f7e923d3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: dir [0x240000408:0x10a9:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 142 previous similar messages LustreError: 241415:0:(llite_lib.c:1888:ll_update_lsm_md()) lustre: [0x200000405:0x30e8:0x0] dir layout mismatch: LustreError: 241415:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 241415:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x200000400:0xd4:0x0] LustreError: 241415:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 5 previous similar messages LustreError: 241415:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 241417:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=4 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 
241417:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 241642:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=6 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 241642:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=3 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= 18[247035]: segfault at 8 ip 00007fbd7f619875 sp 00007ffd16b17620 error 4 in ld-2.28.so[7fbd7f5f8000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 249186:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff91f3a0065000: dir page locate: [0x200000405:0x2fa9:0x0] at 0: rc -5 LustreError: 249186:0:(mdc_request.c:1492:mdc_read_page()) Skipped 59 previous similar messages LustreError: 261110:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 261110:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 2 previous similar messages 1[261457]: segfault at 7f4d2668f565 ip 0000561e390c9359 sp 00007ffd0f0c2648 error 4 in 1[561e390c3000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 28 42 0e 20 42 0e 18 42 0e 10 42 0e 08 46 0b <03> b1 0d 0e 88 02 50 0e 90 02 44 0e 98 02 44 0e a0 02 5b 0e 80 02 | Link to test |
racer test 1: racer on clients: centos-100.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 108b91067 P4D 108b91067 PUD 1e6bdb067 PMD 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 15 PID: 521170 Comm: ll_sa_521053 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffa6b383063e90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008020001f RDX: 0000000080200020 RSI: ffff90d88c7f0c88 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff90d8b3c47a00 R11: 0000000000000000 R12: ffff90d88c7f0c40 R13: ffff90d8b3c47ab8 R14: ffff90d88c7f0908 R15: ffff90d88c7f0c88 FS: 0000000000000000(0000) GS:ffff90d9f25c0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000001b74cd000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_zfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) zfs(O) spl(O) libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 9619:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff90d8523e7700 x1840392964684288/t4294967898(0) o101->468ce073-3af1-4076-ab14-1b017a532b32@0@lo:659/0 lens 376/864 e 0 to 0 dl 1755135569 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 hrtimer: interrupt took 2356993 ns LustreError: 6262:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000401:0x38b:0x0] ACL: rc = -2 Lustre: 9307:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0xb60:0x0] with magic=0xbd60bd0 6[47226]: segfault at 8 ip 00007f9328d32875 sp 00007ffe8bbddb00 error 4 in ld-2.28.so[7f9328d11000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 17[53192]: segfault at 8 ip 00007f31b658e875 sp 00007ffeb629c5c0 error 4 in ld-2.28.so[7f31b656d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: mdt00_000: service thread pid 6261 was inactive for 41.927 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: task:mdt00_008 state:I stack:0 pid:8973 ppid:2 flags:0x80004080 Lustre: Skipped 1 previous similar message task:mdt00_000 state:I Call Trace: stack:0 pid:6261 ppid:2 flags:0x80004080 __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] ? mdt_object_find+0x106/0x480 [mdt] Call Trace: ? lustre_msg_add_version+0x29/0xd0 [ptlrpc] __schedule+0x351/0xcb0 mdt_object_find_lock+0x72/0x1c0 [mdt] mdt_reint_setxattr+0x1ba/0x1830 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] ? lu_object_find+0x1d/0x30 [obdclass] ? 
mdt_object_find+0x106/0x480 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d7e4beb800/0x4944ad0ac8613f52 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0xf5f:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac8613f36 expref: 441 pid: 8977 timeout: 370 lvb_type: 0 Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: mdt00_008: service thread pid 8973 completed after 103.535s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff90d850f30800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. Lustre: mdt00_000: service thread pid 6261 completed after 103.379s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 53619:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 54080:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d850f30800: inode [0x200000401:0xdc2:0x0] mdc close failed: rc = -108 LustreError: 53940:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff90d850f30800: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 53940:0:(mdc_request.c:1477:mdc_read_page()) Skipped 10 previous similar messages LustreError: 53221:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 54080:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff90d850f30800: namespace resource [0x200000401:0x1:0x0].0x0 (ffff90d7d135f900) refcount nonzero (2) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection restored to (at 0@lo) Lustre: 6261:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x26d:0x0] with magic=0xbd60bd0 Lustre: 6261:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 8[61063]: segfault at 1b30 ip 0000000000001b30 sp 00007ffc057c78f0 error 14 Code: Unable to access opcode bytes at RIP 0x1b06. 
Lustre: 6263:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x435:0x0] with magic=0xbd60bd0 Lustre: 6263:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 8829:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x484:0x0] with magic=0xbd60bd0 Lustre: 8829:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 9699:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x1686:0x0] with magic=0xbd60bd0 Lustre: 9699:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message LustreError: 8749:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000402:0x17bf:0x0] ACL: rc = -2 LustreError: 8749:0:(mdt_handler.c:746:mdt_pack_acl2body()) Skipped 1 previous similar message Lustre: 16074:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0xbb3:0x0] with magic=0xbd60bd0 Lustre: 16074:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 16[95940]: segfault at 0 ip 000056057522eb47 sp 00007fffae553930 error 6 in 16[56057522a000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d83e5f9000/0x4944ad0ac88fa024 lrc: 3/0,0 mode: CR/CR res: [0x200000403:0x17b4:0x0].0x0 bits 0xa/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac88fa016 expref: 923 pid: 6263 timeout: 696 lvb_type: 0 LustreError: 
6263:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000006bb8832e ns: mdt-lustre-MDT0000_UUID lock: ffff90d878b93800/0x4944ad0ac88fa5a3 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x17b4:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x4944ad0ac88fa56b expref: 34 pid: 6263 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 121498:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000403:0x17b4:0x0] error -5. LustreError: 121531:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 121884:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d83bea4000: inode [0x200000402:0x261e:0x0] mdc close failed: rc = -108 LustreError: 121884:0:(file.c:248:ll_close_inode_openhandle()) Skipped 9 previous similar messages LustreError: 121622:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 121622:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 15 previous similar messages LustreError: 121748:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff90d83bea4000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 121748:0:(mdc_request.c:1477:mdc_read_page()) Skipped 22 previous similar messages LustreError: 121884:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff90d83bea4000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff90d8a8b8ec00) refcount nonzero (2) after lock cleanup; forcing cleanup. 
Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection restored to (at 0@lo) 11[122996]: segfault at 0 ip 000055b3cf0d1b47 sp 00007ffd8a652f60 error 6 in 11[55b3cf0cd000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d80d829000/0x4944ad0ac8909475 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x63:0x0].0x0 bits 0x1b/0x0 rrc: 8 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac8909467 expref: 77 pid: 9679 timeout: 801 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 20350:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000d1bbc80d ns: mdt-lustre-MDT0000_UUID lock: ffff90d870307800/0x4944ad0ac890b331 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x63:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x4944ad0ac890b30e expref: 24 pid: 20350 timeout: 0 lvb_type: 0 LustreError: 6246:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755136266 with bad export cookie 5279534925011201943 Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 123133:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 123441:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d83bea4000: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108 LustreError: 123441:0:(file.c:248:ll_close_inode_openhandle()) Skipped 8 previous similar messages LustreError: 123315:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 123315:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 12 previous similar messages LustreError: 123045:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x63:0x0] error -108. Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection restored to (at 0@lo) 5[128455]: segfault at 8 ip 00007f8570ffa875 sp 00007ffcbeff38e0 error 4 in ld-2.28.so[7f8570fd9000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d80b076c00/0x4944ad0ac89605b4 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x1ae4:0x0].0x0 bits 0x9/0x0 rrc: 19 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac896057c expref: 661 pid: 10311 timeout: 930 lvb_type: 0 LustreError: 55638:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755136393 with bad export cookie 5279534925008161829 Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff90d850f30800: operation mds_close to node 0@lo failed: rc = -107 LustreError: 55638:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) Skipped 1 previous similar message LustreError: 
lustre-MDT0000-mdc-ffff90d850f30800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 130920:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d850f30800: inode [0x200000403:0x1ae4:0x0] mdc close failed: rc = -5 LustreError: 130920:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection restored to (at 0@lo) Lustre: 7956:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0xfd:0x0] with magic=0xbd60bd0 Lustre: 7956:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: mdt_io00_003: service thread pid 10604 was inactive for 42.195 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:mdt_io00_003 state:I stack:0 pid:10604 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0x6d7/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_rename_source_lock+0x6b/0x180 [mdt] mdt_reint_rename+0x1781/0x34e0 [mdt] ? lustre_pack_reply_v2+0x230/0x380 [ptlrpc] ? ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? 
lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d7d0d29400/0x4944ad0ac8a50b5b lrc: 3/0,0 mode: CR/CR res: [0x200000406:0x767:0x0].0x0 bits 0xa/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac8a50b31 expref: 201 pid: 16651 timeout: 1099 lvb_type: 0 Lustre: mdt_io00_003: service thread pid 10604 completed after 103.642s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff90d850f30800: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 8959:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755136565 with bad export cookie 5279534925011627347 LustreError: lustre-MDT0000-mdc-ffff90d850f30800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 153806:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff90d850f30800: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: Skipped 7 previous similar messages LustreError: 154008:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d850f30800: inode [0x200000406:0x67d:0x0] mdc close failed: rc = -108 LustreError: 153806:0:(mdc_request.c:1477:mdc_read_page()) Skipped 8 previous similar messages LustreError: 154008:0:(file.c:248:ll_close_inode_openhandle()) Skipped 3 previous similar messages LustreError: 153658:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 153658:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 14 previous similar messages Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection restored to (at 0@lo) 4[160227]: segfault at 0 ip 000056058e21cb47 sp 00007ffe8bd10570 error 6 in 16[56058e218000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 166601:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d850f30800: inode [0x200000407:0x41d:0x0] mdc close failed: rc = -13 LustreError: 166601:0:(file.c:248:ll_close_inode_openhandle()) Skipped 4 previous similar messages 1[169326]: segfault at 0 ip 00005611dd0cad50 sp 00007fffeb25de58 error 6 in 1[5611dd0c6000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 5[169758]: segfault at 0 ip 000055a526ecab47 sp 00007ffc2f51eba0 error 6 in 5[55a526ec6000+7000] Code: Unable to access opcode bytes at RIP 0x55a526ecab1d. 
LustreError: 9664:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000405:0x1492:0x0] ACL: rc = -2 6[187719]: segfault at 0 ip 00005570df571b47 sp 00007ffed9d92860 error 6 in 6[5570df56d000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d8702d4200/0x4944ad0ac8c0192c lrc: 3/0,0 mode: CR/CR res: [0x200000407:0xd68:0x0].0x0 bits 0xa/0x0 rrc: 16 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac8c01814 expref: 611 pid: 20346 timeout: 1341 lvb_type: 0 LustreError: 6246:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755136806 with bad export cookie 5279534925011269416 LustreError: 9679:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000029c9aa93 ns: mdt-lustre-MDT0000_UUID lock: ffff90d87284de00/0x4944ad0ac8c024af lrc: 3/0,0 mode: PR/PR res: [0x200000407:0xd68:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0x4944ad0ac8c02493 expref: 548 pid: 9679 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 194443:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d83bea4000: inode [0x200000407:0xb39:0x0] mdc close failed: rc = -108 LustreError: 193657:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000407:0xd68:0x0] error -5. LustreError: 194446:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 194443:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff90d83bea4000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff90d7d006b300) refcount nonzero (2) after lock cleanup; forcing cleanup. LustreError: 194443:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection restored to (at 0@lo) LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d86a733800/0x4944ad0ac8c8dde0 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x123c:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac8c8ddc4 expref: 450 pid: 9497 timeout: 1472 lvb_type: 0 LustreError: 6246:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755136937 with bad export cookie 5279534925012604155 LustreError: lustre-MDT0000-mdc-ffff90d850f30800: operation mds_reint to node 0@lo failed: rc = -107 LustreError: Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff90d850f30800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 207487:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 207487:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d850f30800: inode [0x200000407:0x123c:0x0] mdc close failed: rc = -108 LustreError: 207487:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages LustreError: 207497:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000407:0x123c:0x0] error: rc = -5 LustreError: 207497:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 31 previous similar messages Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection restored to (at 0@lo) Lustre: 9608:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000408:0x4d0:0x0] with magic=0xbd60bd0 Lustre: 9608:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages 18[212294]: segfault at 8 ip 00007fd520dd0875 sp 00007ffebb5c6450 error 4 in ld-2.28.so[7fd520daf000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 10[212649]: segfault at 8 ip 00007fe82aeae875 sp 00007fff914108b0 error 4 in ld-2.28.so[7fe82ae8d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 19[223068]: segfault at 0 ip 000055c95beecb47 sp 00007ffd534837f0 error 6 in 1[55c95bee8000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 236569:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d83bea4000: inode [0x200000409:0x812:0x0] mdc close failed: rc = -13 LustreError: 
236569:0:(file.c:248:ll_close_inode_openhandle()) Skipped 7 previous similar messages Lustre: 6262:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000409:0x15cd:0x0] with magic=0xbd60bd0 Lustre: 6262:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 7 previous similar messages LustreError: 8973:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000409:0x17bb:0x0] ACL: rc = -2 5[276959]: segfault at 0 ip 000055bcdbdacb47 sp 00007ffe30f8a120 error 6 in 5[55bcdbda8000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 20350:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x200000408:0x1d75:0x0] ACL: rc = -2 3[285666]: segfault at 8 ip 00007fbdb1df9875 sp 00007fff1a8a66a0 error 4 in ld-2.28.so[7fbdb1dd8000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d876846e00/0x4944ad0ac8ff8ed5 lrc: 3/0,0 mode: CR/CR res: [0x200000408:0x2205:0x0].0x0 bits 0xa/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac8ff8ec7 expref: 789 pid: 10299 timeout: 1803 lvb_type: 0 LustreError: 6261:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000d8d13fef ns: mdt-lustre-MDT0000_UUID lock: ffff90d876845e00/0x4944ad0ac8ff9dc3 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x2205:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x4944ad0ac8ff9d0d expref: 654 pid: 6261 timeout: 0 lvb_type: 0 
LustreError: 8959:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755137269 with bad export cookie 5279534925014389834 LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 6261:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff90d8b3d65b00 x1840393168795264/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: Skipped 1 previous similar message LustreError: 288467:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000408:0x2205:0x0] error -108. 
LustreError: 288668:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff90d83bea4000: dir page locate: [0x200000401:0x1:0x0] at 0: rc -5 LustreError: 288467:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d83bea4000: inode [0x200000408:0x2205:0x0] mdc close failed: rc = -108 LustreError: 288668:0:(mdc_request.c:1492:mdc_read_page()) Skipped 6 previous similar messages LustreError: 288670:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff90d83bea4000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 288467:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages LustreError: 288670:0:(mdc_request.c:1477:mdc_read_page()) Skipped 10 previous similar messages LustreError: 288623:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 288623:0:(file.c:6076:ll_inode_revalidate_fini()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection restored to (at 0@lo) 15[293486]: segfault at 8 ip 00007f0b246ae875 sp 00007ffea7413b50 error 4 in ld-2.28.so[7f0b2468d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d847e8e800/0x4944ad0ac90826bf lrc: 3/0,0 mode: PR/PR res: [0x200000409:0x2075:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac9082680 expref: 715 pid: 9664 timeout: 1933 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff90d850f30800: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: 10074:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1755137396 with bad export cookie 
5279534925014955049 Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff90d850f30800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 300909:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000409:0x2075:0x0] error: rc = -5 Lustre: lustre-MDT0000-mdc-ffff90d850f30800: Connection restored to (at 0@lo) Lustre: 9608:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040a:0x676:0x0] with magic=0xbd60bd0 Lustre: 9608:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 0[307707]: segfault at 0 ip 0000562fb11c2b47 sp 00007ffc8cde4790 error 6 in 0[562fb11be000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 1[314613]: segfault at 0 ip 00005592489deb47 sp 00007ffcbfa2ac90 error 6 in 1[5592489da000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 8[334578]: segfault at 0 ip 000055caecb0fb47 sp 00007ffd53285a10 error 6 in 8[55caecb0b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12[348979]: segfault at 0 ip 00005607ab425b47 sp 00007ffd35cb9530 error 6 in 12[5607ab421000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
LustreError: 10311:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x20000040b:0x12d7:0x0] ACL: rc = -2 LustreError: 20350:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x20000040b:0x1a8f:0x0] ACL: rc = -2 8[383421]: segfault at 8 ip 00007f75b4d89875 sp 00007ffede9f8700 error 4 in ld-2.28.so[7f75b4d68000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 16653:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x20000040a:0x249a:0x0] ACL: rc = -2 Lustre: 9619:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040a:0x2e32:0x0] with magic=0xbd60bd0 Lustre: 9619:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 17 previous similar messages 19[477707]: segfault at 0 ip 000055a9bf22fb47 sp 00007ffd56aeeda0 error 6 in 19[55a9bf22b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 479184:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff90d850f30800: inode [0x20000040b:0x3f34:0x0] mdc close failed: rc = -13 LustreError: 479184:0:(file.c:248:ll_close_inode_openhandle()) Skipped 19 previous similar messages 14[496982]: segfault at 0 ip 000055d829640b47 sp 00007ffc4a4fb490 error 6 in 14[55d82963c000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: mdt00_011: service thread pid 9316 was inactive for 40.773 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: task:mdt00_022 state:I task:mdt00_013 state:I stack:0 pid:9608 ppid:2 flags:0x80004080 Call Trace: Lustre: mdt00_020: service thread pid 10311 was inactive for 40.759 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_getattr_name_lock+0x274f/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] Lustre: Skipped 2 previous similar messages tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] stack:0 pid:16651 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 task:mdt00_011 state:I stack:0 pid:9316 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? 
ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] schedule_timeout+0xb4/0x190 mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? __next_timer_interrupt+0x160/0x160 ? mdt_obd_postrecov+0x100/0x100 [mdt] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_getattr_name_lock+0x274f/0x3350 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] mdt_getattr_name_lock+0x274f/0x3350 [mdt] tgt_request_handle+0x351/0x1c00 [ptlrpc] mdt_intent_getattr+0x2e2/0x630 [mdt] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ? _raw_read_unlock+0x12/0x30 ptlrpc_main+0xd30/0x1450 [ptlrpc] ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] kthread+0x1d1/0x200 tgt_handle_request0+0x137/0xaf0 [ptlrpc] ? set_kthread_struct+0x70/0x70 tgt_request_handle+0x351/0x1c00 [ptlrpc] ret_from_fork+0x1f/0x30 ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? 
lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Lustre: lustre-OST0001-osc-ffff90d850f30800: disconnect after 21s idle LustreError: 6251:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff90d823402600/0x4944ad0ac98fe280 lrc: 3/0,0 mode: PR/PR res: [0x20000040a:0x4d18:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x4944ad0ac98fe25d expref: 1436 pid: 10311 timeout: 2579 lvb_type: 0 Lustre: mdt00_028: service thread pid 336730 completed after 102.326s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_002: service thread pid 6276 completed after 102.290s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_022: service thread pid 16651 completed after 102.262s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_006: service thread pid 8789 completed after 102.257s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_005: service thread pid 8749 completed after 102.252s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_020: service thread pid 10311 completed after 102.225s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_004: service thread pid 8741 completed after 102.224s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: operation mds_getxattr to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: mdt00_013: service thread pid 9608 completed after 102.243s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_011: service thread pid 9316 completed after 102.243s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_010: service thread pid 9307 completed after 102.243s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff90d83bea4000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 501909:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x20000040a:0x4d18:0x0] error -5. LustreError: 502364:0:(file.c:6076:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 Lustre: lustre-MDT0000-mdc-ffff90d83bea4000: Connection restored to (at 0@lo) LustreError: 20350:0:(mdt_handler.c:746:mdt_pack_acl2body()) lustre-MDT0000: unable to read [0x20000040c:0x2e7:0x0] ACL: rc = -2 7[517272]: segfault at 8 ip 00007f28e62ba875 sp 00007ffc2b5511f0 error 4 in ld-2.28.so[7f28e6299000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 | Link to test |
racer test 2: racer rename: onyx-112vm4.onyx.whamcloud.com,onyx-112vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1111941 Comm: ll_sa_1111796 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.58.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 23 6c 03 71 5b c3 cc cc cc cc 48 89 df e8 85 0a af ff 39 05 73 90 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffaeac889e7e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff9044dd6b0970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff9044c58d7e00 R11: 0000000000000000 R12: ffff9044c58d7e00 R13: ffff9044c58d7e98 R14: ffff9044dd6b0690 R15: ffff9044c58d7ea8 FS: 0000000000000000(0000) GS:ffff90453cc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000b4e10002 CR4: 00000000001706f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x5d6/0x1e00 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) ec(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr i2c_piix4 virtio_balloon joydev sunrpc ext4 mbcache jbd2 ata_generic virtio_net ata_piix libata crc32c_intel serio_raw net_failover failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false 
RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Autotest: Test running for 275 minutes (lustre-reviews_review-dne-part-9_115706.34) | Link to test |
racer test 1: racer on clients: centos-75.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 0 PID: 925687 Comm: ll_sa_925680 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffaad5de90be90 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008020001e RDX: 000000008020001f RSI: ffff8ba19aed77c8 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff8ba0fefb1200 R11: 00000000000071b5 R12: ffff8ba19aed7780 R13: ffff8ba0fefb12b8 R14: ffff8ba19aed7448 R15: ffff8ba19aed77c8 FS: 0000000000000000(0000) GS:ffff8ba2f2200000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000001ca575000 CR4: 00000000000006f0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) lnet(O) dm_flakey libcfs(O) loop zfs(O) spl(O) ec(O) crc32_generic virtio_balloon i2c_piix4 pcspkr rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: 445093:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8ba126f58700 x1839954257973120/t4294967482(0) o101->123ec809-9a5e-4915-b879-999b8c857783@0@lo:537/0 lens 376/840 e 0 to 0 dl 1754717177 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 443167:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 443167:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 443167:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 443167:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 443167:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 443167:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 17[447435]: segfault at 8 ip 00007ff47dffc875 sp 00007ffd0eee0e00 error 4 in ld-2.28.so[7ff47dfdb000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 3 previous similar messages Lustre: 
445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445952:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 445952:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 445952:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 445952:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 445952:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 445952:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 445952:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 445952:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 445952:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 445952:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 445952:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 
0/0/0, ref_del: 0/0/0 Lustre: 445952:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 442892:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x157:0x0] with magic=0xbd60bd0 Lustre: 443169:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 443169:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 443169:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 443169:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 443169:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 443169:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 443169:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 443169:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 443169:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 443169:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 443169:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 443169:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 443167:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 443167:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 443167:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 443167:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 443167:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 
443167:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 443167:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 443167:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 443167:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 443167:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 443167:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 443167:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 448657:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 448657:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 7 previous similar messages Lustre: 448657:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 448657:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 448657:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 448657:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 448657:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 448657:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 448657:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 448657:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 448657:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 448657:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 
445716:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 444896:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0xa48:0x0] with magic=0xbd60bd0 Lustre: 444896:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 445952:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 564, rollback = 7 Lustre: 445952:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 15 previous similar messages Lustre: 445952:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 445952:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 445952:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 445952:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 445952:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/564/0, punch: 
0/0/0, quota 1/3/0 Lustre: 445952:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 445952:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 445952:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 445952:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 445952:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 15 previous similar messages 12[485717]: segfault at 8 ip 00007f2fb5d1e875 sp 00007ffff8b15510 error 4 in ld-2.28.so[7f2fb5cfd000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 445073:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0xf4d:0x0] with magic=0xbd60bd0 Lustre: 445073:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 442892:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x1152:0x0] with magic=0xbd60bd0 Lustre: 442892:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message LustreError: 497185:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba0e067b000: inode [0x200000401:0x11c4:0x0] mdc close failed: rc = -13 Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 41 previous similar messages Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 41 previous similar messages Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 
Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 41 previous similar messages Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 41 previous similar messages Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 41 previous similar messages Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 41 previous similar messages 7[503049]: segfault at 8 ip 00007f6229897875 sp 00007ffdade37fd0 error 4 in ld-2.28.so[7f6229876000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 447007:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x13c3:0x0] with magic=0xbd60bd0 Lustre: 447007:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: mdt00_023: service thread pid 484909 was inactive for 44.024 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:mdt00_018 state:I stack:0 pid:448739 ppid:2 flags:0x80004080 Lustre: Skipped 1 previous similar message Call Trace: __schedule+0x351/0xcb0 task:mdt00_012 state:I stack:0 pid:445345 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 task:mdt00_023 state:I stack:0 pid:484909 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? 
__next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? woken_wake_function+0x30/0x30 ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] ? lu_object_find+0x1d/0x30 [obdclass] ? mdt_object_find+0x106/0x480 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] tgt_request_handle+0x351/0x1c00 [ptlrpc] ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? 
ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_object_lock+0x9e/0x240 [mdt] mdt_object_stripes_lock+0x28b/0x670 [mdt] ? mdt_object_find+0x106/0x480 [mdt] mdt_reint_setattr+0xf58/0x1f90 [mdt] mdt_object_find_lock+0x72/0x1c0 [mdt] ? ucred_set_audit_enabled.isra.12+0x28/0xa0 [mdt] mdt_reint_setxattr+0x1ba/0x1830 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] kthread+0x1d1/0x200 ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba1a22db600/0xde29953195ce406e lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x17d1:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde29953195ce3fcd expref: 728 pid: 448730 timeout: 3251 lvb_type: 0
Lustre: mdt00_023: service thread pid 484909 completed after 101.385s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_018: service thread pid 448739 completed after 101.385s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_012: service thread pid 445345 completed after 101.281s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 442877:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754717489 with bad export cookie 16008490390661186409
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: operation mds_getxattr to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 2 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 516227:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba17897d000: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108
LustreError: 515127:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 515127:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 2 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection restored to (at 0@lo)
Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7
Lustre: 445716:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 69 previous similar messages
Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 445716:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 69 previous similar messages
Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0
Lustre: 445716:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 69 previous similar messages
Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 445716:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 69 previous similar messages
Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 445716:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 69 previous similar messages
Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 445716:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 69 previous similar messages
2[517311]: segfault at 1b30 ip 0000000000001b30 sp 00007ffdcb7c2210 error 14
Code: Unable to access opcode bytes at RIP 0x1b06.
1[517506]: segfault at 1b30 ip 0000000000001b30 sp 00007ffcbea1d990 error 14
Code: Unable to access opcode bytes at RIP 0x1b06.
LustreError: 517506:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba0e067b000: inode [0x200000403:0x48:0x0] mdc close failed: rc = -13
LustreError: 517506:0:(file.c:248:ll_close_inode_openhandle()) Skipped 13 previous similar messages
Lustre: 457429:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x199:0x0] with magic=0xbd60bd0
Lustre: 457429:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages
1[522167]: segfault at 8 ip 00007f0ac53b8875 sp 00007fff4b1ea640 error 4 in ld-2.28.so[7f0ac5397000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
6[522945]: segfault at 8 ip 00007f70d48b3875 sp 00007ffe4c07db80 error 4 in ld-2.28.so[7f70d4892000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba11d8f5c00/0xde29953195d4e712 lrc: 3/0,0 mode: CR/CR res: [0x200000403:0x2cd:0x0].0x0 bits 0xa/0x0 rrc: 8 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde29953195d4e59f expref: 155 pid: 445093 timeout: 3380 lvb_type: 0
LustreError: 443723:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000c43d2587 ns: mdt-lustre-MDT0000_UUID lock: ffff8ba0ee9f7c00/0xde29953195d4f3b4 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x24b:0x0].0x0 bits 0x13/0x0 rrc: 10 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde29953195d4f39f expref: 112 pid: 443723 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 2 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 524346:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x24b:0x0] error: rc = -5
LustreError: 524355:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff8ba17897d000: [0x200000403:0x24b:0x0] lock enqueue fails: rc = -108
LustreError: 524346:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 46 previous similar messages
LustreError: 525432:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba17897d000: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108
LustreError: 524997:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000403:0x2cd:0x0] error -108.
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection restored to (at 0@lo)
Lustre: 445111:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x1bc:0x0] with magic=0xbd60bd0
Lustre: 445111:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
11[532959]: segfault at 8 ip 00007f2bf8683875 sp 00007fff01852bb0 error 4 in ld-2.28.so[7f2bf8662000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
1[538142]: segfault at 0 ip 00005619af4a7b47 sp 00007ffd02b09e60 error 6 in 1[5619af4a3000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
10[543171]: segfault at 8 ip 00007f7400efb875 sp 00007fff86d5f0b0 error 4 in ld-2.28.so[7f7400eda000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 442893:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x86e:0x0] with magic=0xbd60bd0
Lustre: 442893:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
5[556323]: segfault at 8 ip 00007f5782287875 sp 00007ffe8d940e50 error 4 in ld-2.28.so[7f5782266000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
5[556409]: segfault at 8 ip 00007f21d3cae875 sp 00007ffeb4e7de90 error 4 in ld-2.28.so[7f21d3c8d000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 556409:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba17897d000: inode [0x200000404:0xa79:0x0] mdc close failed: rc = -13
LustreError: 556409:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages
4[557785]: segfault at 8 ip 00007f3255f32875 sp 00007ffdc5046610 error 4 in ld-2.28.so[7f3255f11000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
6[562770]: segfault at 0 ip 000055f52b912b47 sp 00007ffc1337b120 error 6 in 6[55f52b90e000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
6[563145]: segfault at 8 ip 00007f49c50d2875 sp 00007fff92945e00 error 4 in ld-2.28.so[7f49c50b1000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 448657:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7
Lustre: 448657:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 117 previous similar messages
Lustre: 448657:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 448657:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 117 previous similar messages
Lustre: 448657:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 448657:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 117 previous similar messages
Lustre: 448657:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 448657:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 117 previous similar messages
Lustre: 448657:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 448657:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 117 previous similar messages
Lustre: 448657:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 448657:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 117 previous similar messages
19[568225]: segfault at 0 ip 0000556f9b302b47 sp 00007ffc5e4335f0 error 6 in 19[556f9b2fe000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Lustre: lustre-OST0003-osc-ffff8ba17897d000: disconnect after 24s idle
Lustre: mdt_io00_002: service thread pid 442906 was inactive for 43.118 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
task:mdt_io00_002 state:I stack:0 pid:442906 ppid:2 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0x6d7/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_rename_source_lock+0x6b/0x180 [mdt] mdt_reint_rename+0x1781/0x34e0 [mdt] ? lustre_pack_reply_v2+0x1b0/0x380 [ptlrpc] ? ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
Lustre: mdt_out00_002: service thread pid 445334 was inactive for 43.007 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
task:mdt_out00_002 state:I stack:0 pid:445334 ppid:2 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_pdo_lock+0x409/0x910 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lu_name_is_valid_len+0x5e/0x80 [mdt] mdt_getattr_name_lock+0x278a/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] mdt_batch+0x7ee/0x20a9 [mdt] ? lustre_msg_get_last_committed+0xb0/0x110 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba0c2773800/0xde29953195f958f3 lrc: 3/0,0 mode: CR/CR res: [0x200000401:0x2eac:0x0].0x0 bits 0xa/0x0 rrc: 9 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde29953195f956df expref: 1033 pid: 445378 timeout: 3645 lvb_type: 0
LustreError: 445111:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8ba169eab100 x1839954356237312/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: operation mds_close to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: 442876:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754717882 with bad export cookie 16008490390661186017
Lustre: mdt_io00_002: service thread pid 442906 completed after 100.485s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_out00_002: service thread pid 445334 completed after 71.702s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: Skipped 7 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 575988:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff8ba0e067b000: [0x200000401:0x2d92:0x0] lock enqueue fails: rc = -5
LustreError: 575988:0:(mdc_request.c:1477:mdc_read_page()) Skipped 18 previous similar messages
LustreError: 576556:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba0e067b000: inode [0x200000401:0x2eac:0x0] mdc close failed: rc = -5
LustreError: 576556:0:(file.c:248:ll_close_inode_openhandle()) Skipped 4 previous similar messages
LustreError: 576588:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 576588:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 6 previous similar messages
LustreError: 576556:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection restored to (at 0@lo)
Lustre: lustre-OST0003-osc-ffff8ba0e067b000: disconnect after 22s idle
0[579956]: segfault at 0 ip 000055e16acb2b47 sp 00007ffc7ea97450 error 6 in 0[55e16acae000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba1a23ef800/0xde2995319605babf lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x176e:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde2995319605ba41 expref: 181 pid: 442891 timeout: 3799 lvb_type: 0
LustreError: 448730:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000060b89315 ns: mdt-lustre-MDT0000_UUID lock: ffff8ba12cb5b400/0xde2995319605bb8a lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x176e:0x0].0x0 bits 0x20/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde2995319605ba8e expref: 3 pid: 448730 timeout: 0 lvb_type: 0
LustreError: 448730:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: Skipped 2 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 594034:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 594236:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba0e067b000: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108
LustreError: 594236:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages
LustreError: 594031:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x176e:0x0] error: rc = -108
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection restored to (at 0@lo)
LustreError: 594031:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 8 previous similar messages
7[597618]: segfault at 8 ip 00007fe04e9f1875 sp 00007fff6d611580 error 4 in ld-2.28.so[7fe04e9d0000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
5[610242]: segfault at 0 ip 000055ceb892fb47 sp 00007ffdd05a68a0 error 6 in 5[55ceb892b000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Lustre: 445379:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0x837:0x0] with magic=0xbd60bd0
Lustre: 445379:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages
3[622935]: segfault at 0 ip 0000560dff5d4b47 sp 00007ffd4de70d10 error 6 in 3[560dff5d0000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
LustreError: 629149:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba17897d000: inode [0x200000406:0xbb6:0x0] mdc close failed: rc = -13
LustreError: 629149:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message
7[640386]: segfault at 8 ip 00007fb8edcae875 sp 00007ffc2ce99d60 error 4 in ld-2.28.so[7fb8edc8d000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba171114800/0xde299531962ef951 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x2c64:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde299531962ef943 expref: 983 pid: 450397 timeout: 4066 lvb_type: 0
LustreError: 445231:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000052e2460a ns: mdt-lustre-MDT0000_UUID lock: ffff8ba16b216400/0xde299531962efb03 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x2c64:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde299531962efa0e expref: 291 pid: 445231 timeout: 0 lvb_type: 0
LustreError: 442876:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754718304 with bad export cookie 16008490390664836825
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 442876:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) Skipped 1 previous similar message
LustreError: 653774:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba17897d000: inode [0x200000406:0x1451:0x0] mdc close failed: rc = -108
LustreError: 653571:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 653571:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 2 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection restored to (at 0@lo)
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba152670c00/0xde2995319631526a lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x1564:0x0].0x0 bits 0x13/0x0 rrc: 15 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde299531963151e5 expref: 582 pid: 445345 timeout: 4176 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 2 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 657274:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff8ba0e067b000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 657274:0:(mdc_request.c:1477:mdc_read_page()) Skipped 11 previous similar messages
LustreError: 657156:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 657156:0:(llite_lib.c:2040:ll_md_setattr()) Skipped 2 previous similar messages
LustreError: 657293:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 657293:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 63 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection restored to (at 0@lo)
Lustre: 448657:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7
Lustre: 448657:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 195 previous similar messages
Lustre: 448657:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 448657:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 195 previous similar messages
Lustre: 448657:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 448657:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 195 previous similar messages
Lustre: 448657:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0
Lustre: 448657:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 195 previous similar messages
Lustre: 448657:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 448657:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 195 previous similar messages
Lustre: 448657:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 448657:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 195 previous similar messages
6[659308]: segfault at 8 ip 00007f3980fe9875 sp 00007ffd389ed690 error 4 in ld-2.28.so[7f3980fc8000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba1a20a0200/0xde29953196351e6e lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x1a0:0x0].0x0 bits 0x1b/0x0 rrc: 17 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde29953196351e60 expref: 126 pid: 448730 timeout: 4300 lvb_type: 0
LustreError: 445111:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000dc4b3d3b ns: mdt-lustre-MDT0000_UUID lock: ffff8ba198007c00/0xde299531963531fb lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x1a0:0x0].0x0 bits 0x20/0x0 rrc: 12 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde299531963531e6 expref: 19 pid: 445111 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 662585:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x1a0:0x0] error: rc = -5
LustreError: 662585:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 5 previous similar messages
LustreError: 662408:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 662328:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba0e067b000: inode [0x200000408:0x1a0:0x0] mdc close failed: rc = -108
LustreError: 662408:0:(llite_lib.c:2040:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 662710:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000408:0x1c9:0x0] error -108.
LustreError: 662710:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message
LustreError: 662328:0:(file.c:248:ll_close_inode_openhandle()) Skipped 22 previous similar messages
LustreError: 662803:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff8ba0e067b000: namespace resource [0x200000407:0x2c3:0x0].0x0 (ffff8ba0d0efd700) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection restored to (at 0@lo)
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba1784df000/0xde299531963e8d36 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x740:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde299531963e8bdf expref: 279 pid: 445231 timeout: 4439 lvb_type: 0
LustreError: 442893:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000ba39aabf ns: mdt-lustre-MDT0000_UUID lock: ffff8ba1790fa000/0xde299531963e96c1 lrc: 4/0,0 mode: PR/PR res: [0x200000407:0x740:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde299531963e96a5 expref: 79 pid: 442893 timeout: 0 lvb_type: 0
LustreError: 442893:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 2 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 675920:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 675959:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000407:0x740:0x0] error -108.
LustreError: 675959:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff8ba17897d000: Connection restored to (at 0@lo)
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba13f393e00/0xde29953196467d66 lrc: 3/0,0 mode: PR/PR res: [0x200000409:0x85e:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde29953196467d20 expref: 304 pid: 445093 timeout: 4568 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: operation mds_getxattr to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 687347:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba0e067b000: inode [0x200000409:0x5ba:0x0] mdc close failed: rc = -108
LustreError: 687086:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000409:0x85e:0x0] error -108.
LustreError: 687347:0:(file.c:248:ll_close_inode_openhandle()) Skipped 11 previous similar messages
LustreError: 687354:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 687354:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 15 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection restored to (at 0@lo)
Lustre: 447001:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040b:0xf:0x0] with magic=0xbd60bd0
Lustre: 447001:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 5 previous similar messages
10[692892]: segfault at 8 ip 00007fe8a0d43875 sp 00007ffec8f578b0 error 4 in ld-2.28.so[7fe8a0d22000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
11[697037]: segfault at 8 ip 00007f8322578875 sp 00007fffaeb2aeb0 error 4 in ld-2.28.so[7f8322557000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: mdt_out00_001: service thread pid 442897 was inactive for 42.824 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
task:mdt_out00_001 state:I stack:0 pid:442897 ppid:2 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_pdo_lock+0x535/0x910 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lu_name_is_valid_len+0x5e/0x80 [mdt] mdt_getattr_name_lock+0x278a/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] mdt_batch+0x7ee/0x20a9 [mdt] ? lustre_msg_get_last_committed+0xb0/0x110 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff8ba14f15d400/0xde299531964f8e80 lrc: 3/0,0 mode: PR/PR res: [0x20000040b:0x453:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xde299531964f8e72 expref: 167 pid: 445111 timeout: 4703 lvb_type: 0
LustreError: 448730:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000003facac74 ns: mdt-lustre-MDT0000_UUID lock: ffff8ba10bf91600/0xde299531964faa25 lrc: 3/0,0 mode: PR/PR res: [0x20000040b:0x453:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde299531964f8d45 expref: 88 pid: 448730 timeout: 0 lvb_type: 0
Lustre: mdt_out00_001: service thread pid 442897 completed after 100.175s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 700266:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x20000040b:0x453:0x0] error: rc = -5
LustreError: 700449:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff8ba0e067b000: namespace resource [0x20000040b:0x453:0x0].0x0 (ffff8ba11da4e700) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff8ba0e067b000: Connection restored to (at 0@lo)
6[703835]: segfault at 0 ip 000055eeaec66b47 sp 00007ffdb6068580 error 6 in 6[55eeaec62000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
19[711177]: segfault at 0 ip 000055c0e472cb47 sp 00007ffc4d5af820 error 6 in 19[55c0e4728000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
18[712333]: segfault at 8 ip 00007ffa67cdd875 sp 00007fffea824350 error 4 in ld-2.28.so[7ffa67cbc000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: mdt_io00_003: service thread pid 446470 was inactive for 43.066 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
task:mdt_io00_003 state:I stack:0 pid:446470 ppid:2 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_check_lock+0x24f/0x4d0 [mdt] mdt_reint_rename+0x1835/0x34e0 [mdt] ? lustre_pack_reply_v2+0x1b0/0x380 [ptlrpc] ? ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
Lustre: lustre-OST0003-osc-ffff8ba0e067b000: disconnect after 24s idle
LustreError: 444896:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000005f0564fb ns: mdt-lustre-MDT0000_UUID lock: ffff8ba10bf91c00/0xde2995319659d2b3 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 21 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde2995319659d297 expref: 57 pid: 444896 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: mdt_io00_003: service thread pid 446470 completed after 100.422s.
This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: Skipped 5 previous similar messages LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 715783:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5 LustreError: 715783:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 4 previous similar messages LustreError: 715676:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x20000040a:0xde6:0x0] error -108. Lustre: 445952:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 445952:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 173 previous similar messages Lustre: 445952:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 445952:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 173 previous similar messages Lustre: 445952:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 445952:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 173 previous similar messages Lustre: 445952:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 445952:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 173 previous similar messages Lustre: 445952:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 445952:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 173 previous similar messages Lustre: 445952:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 445952:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 173 previous similar messages 1[742364]: segfault at 8 ip 00007ff93777e875 sp 00007ffce6449c50 error 4 in ld-2.28.so[7ff93775d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 
3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 1[742582]: segfault at 8 ip 00007f8b36efa875 sp 00007ffd77b75db0 error 4 in ld-2.28.so[7f8b36ed9000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 457429:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040c:0x1107:0x0] with magic=0xbd60bd0 Lustre: 457429:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 9 previous similar messages traps: 15[757537] trap invalid opcode ip:559c9828353a sp:7ffed80aefc8 error:0 in 15[559c9827e000+7000] 8[764030]: segfault at 8 ip 00007f9b03ca4875 sp 00007fff58ca2620 error 4 in ld-2.28.so[7f9b03c83000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[764544]: segfault at 0 ip 00005615db1d5b47 sp 00007ffed444b1c0 error 6 in 18[5615db1d1000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12[767073]: segfault at 8 ip 00007fc587ac4875 sp 00007ffca137f650 error 4 in ld-2.28.so[7fc587aa3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 16[781610]: segfault at 8 ip 00007f72bbf58875 sp 00007fff8a8f7720 error 4 in ld-2.28.so[7f72bbf37000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 
23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 17[787690]: segfault at 8 ip 00007f7602bbc875 sp 00007ffd9b5361d0 error 4 in ld-2.28.so[7f7602b9b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 7[791263]: segfault at 0 ip 0000559a723f4a88 sp 00007ffd85a09fe0 error 6 in 7[559a723f3000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: filter-lustre-OST0002_UUID lock: ffff8ba0c3e28200/0xde2995319674bbd9 lrc: 3/0,0 mode: PW/PW res: [0x2c0000400:0x6fe:0x0].0x0 rrc: 6 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000000020020 nid: 0@lo remote: 0xde2995319674bbcb expref: 16 pid: 443163 timeout: 5024 lvb_type: 0 LustreError: 442881:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1 previous similar message Lustre: lustre-OST0002-osc-ffff8ba0e067b000: Connection to lustre-OST0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 1 previous similar message LustreError: lustre-OST0002-osc-ffff8ba0e067b000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
Lustre: 440340:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040c:0x1289:0x0]// may get corrupted (rc -108) Lustre: 440337:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040c:0x20dd:0x0]/ may get corrupted (rc -108) Lustre: 440341:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040c:0x1fc3:0x0]/ may get corrupted (rc -108) Lustre: 440341:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040d:0x1b6b:0x0]/ may get corrupted (rc -108) Lustre: 440340:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040c:0x1ee6:0x0]/ may get corrupted (rc -108) Lustre: 440343:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040d:0x1bf8:0x0]/ may get corrupted (rc -108) Lustre: lustre-OST0002-osc-ffff8ba0e067b000: Connection restored to (at 0@lo) Lustre: Skipped 1 previous similar message 9[803374]: segfault at 7fc01153d783 ip 0000560049fceb70 sp 00007ffdc3386e50 error 7 in 9[560049fc8000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 40 02 5e 0a 0e 18 41 0e 10 41 0e <08> 46 0b 00 00 00 00 00 44 00 00 00 b4 0a 00 00 d0 dd ff ff c9 00 LustreError: 484909:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000008873385c ns: mdt-lustre-MDT0000_UUID lock: ffff8ba1793baa00/0xde2995319695ccc3 lrc: 3/0,0 mode: PR/PR res: [0x20000040c:0x232f:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xde2995319695cb81 expref: 128 pid: 484909 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 
484909:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: Skipped 2 previous similar messages LustreError: lustre-MDT0000-mdc-ffff8ba17897d000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 803153:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x20000040c:0x232f:0x0] error: rc = -5 LustreError: 804132:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 803153:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 16 previous similar messages LustreError: 804132:0:(llite_lib.c:2040:ll_md_setattr()) Skipped 1 previous similar message LustreError: 803153:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8ba17897d000: inode [0x20000040c:0x232f:0x0] mdc close failed: rc = -108 LustreError: 803153:0:(file.c:248:ll_close_inode_openhandle()) Skipped 26 previous similar messages LustreError: 804308:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff8ba17897d000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 804308:0:(mdc_request.c:1477:mdc_read_page()) Skipped 21 previous similar messages 16[817012]: segfault at 8 ip 00007fe948c25875 sp 00007ffe1efd96a0 error 4 in ld-2.28.so[7fe948c04000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: lustre-OST0002-osc-ffff8ba0e067b000: disconnect after 24s idle LustreError: lustre-MDT0000-mdc-ffff8ba0e067b000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
12[821349]: segfault at 0 ip 000056195f921b47 sp 00007ffe9b2cd470 error 6 in 17[56195f91d000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 19[823332]: segfault at 8 ip 00007f44b7c88875 sp 00007ffe91153650 error 4 in ld-2.28.so[7f44b7c67000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 16[824106]: segfault at 7fd0ea4c9b98 ip 0000563ec818bf10 sp 00007ffd1dbe58a8 error 7 in 16[563ec8187000+7000] Code: 48 83 c4 08 5b 5d 41 5c 41 5d 41 5e 41 5f c3 66 66 2e 0f 1f 84 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2[833601]: segfault at 8 ip 00007fa6a05a4875 sp 00007ffccc59dc20 error 4 in ld-2.28.so[7fa6a0583000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 2[833661]: segfault at 8 ip 00007f4de4241875 sp 00007ffeb1eff190 error 4 in ld-2.28.so[7f4de4220000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[841935]: segfault at 0 ip 0000559736ce1b47 sp 00007fff89dde230 error 6 in 18[559736cdd000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 14[854076]: segfault at 1 ip 0000555aa31bd950 sp 00007ffd29c1baa8 error 6 in 14[555aa31b9000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 19[855919]: segfault at 0 ip 00005644d4cf3b47 sp 00007ffd236fcca0 error 6 in 19[5644d4cef000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 791942:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 791942:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 500 previous similar messages Lustre: 791942:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 791942:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 500 previous similar messages Lustre: 791942:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 791942:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 500 previous similar messages Lustre: 791942:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 791942:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 500 previous similar messages Lustre: 791942:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 791942:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 500 previous similar messages Lustre: 791942:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 791942:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 500 previous similar messages LustreError: lustre-OST0003-osc-ffff8ba0e067b000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
Lustre: 440342:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040f:0x16af:0x0]/ may get corrupted (rc -108) Lustre: 440346:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040f:0x931:0x0]// may get corrupted (rc -108) Lustre: 440343:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040f:0x16ae:0x0]/ may get corrupted (rc -108) Lustre: 440344:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.77@tcp:/lustre/fid: [0x20000040f:0x17b7:0x0]/ may get corrupted (rc -108) LustreError: 892021:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0003-osc-ffff8ba0e067b000: namespace resource [0x300000400:0x8f5:0x0].0x0 (ffff8ba14f352b00) refcount nonzero (2) after lock cleanup; forcing cleanup. LustreError: 894434:0:(statahead.c:2457:start_statahead_thread()) lustre: unsupported statahead pattern 0X0. 
11[895678]: segfault at 8 ip 00007fd7b9849875 sp 00007ffd82a166d0 error 4 in ld-2.28.so[7fd7b9828000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 traps: 18[904803] trap invalid opcode ip:55c4dd29cd02 sp:7ffed14b53a0 error:0 in 18[55c4dd296000+7000] 13[907443]: segfault at 8 ip 00007fb184e31875 sp 00007ffcb182ed10 error 4 in ld-2.28.so[7fb184e10000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 448739:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040e:0x2490:0x0] with magic=0xbd60bd0 Lustre: 448739:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 9 previous similar messages | Link to test |
racer test 1: racer on clients: centos-0.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 6 PID: 527926 Comm: ll_sa_527823 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffacfc4cdcbe90 EFLAGS: 00010212 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000080200015 RDX: 0000000080200016 RSI: ffff97b899ad0c88 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff97b7ff1c5a00 R11: 0000000000000000 R12: ffff97b899ad0c40 R13: ffffffffc1d3dcb0 R14: ffff97b899ad0908 R15: ffff97b899ad0c88 FS: 0000000000000000(0000) GS:ffff97b9b2380000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000196ddb000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? do_raw_spin_unlock+0x75/0x190 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 5757:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff97b81b38d080 x1839912896754688/t4294968456(0) o101->e43bfcf4-e911-47ff-bee1-dedbdad26019@0@lo:351/0 lens 376/864 e 0 to 0 dl 1754677731 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 6036:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 528, rollback = 7 Lustre: 6036:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6036:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 6036:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/528/0, punch: 0/0/0, quota 4/150/0 Lustre: 6036:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6036:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 15[10060]: segfault at 8 ip 00007fa3f8f58875 sp 00007fffc7aff850 error 4 in ld-2.28.so[7fa3f8f37000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 9403:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 9403:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1966:osd_trans_dump_creds()) 
create: 0/0/0, destroy: 0/0/0 Lustre: 9403:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 9403:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/78/0 Lustre: 9403:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 9403:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 9403:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 
previous similar message Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 594, rollback = 7 Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/594/0, punch: 0/0/0, quota 1/3/0 Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 
9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message 6[28816]: segfault at 8 ip 00007f4690fbf875 sp 00007ffe9402fc50 error 4 in ld-2.28.so[7f4690f9e000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 9403:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 9403:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 9403:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 9403:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 9403:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 9403:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 9403:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 9403:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8251:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8251:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 5 
previous similar messages Lustre: 8251:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8251:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 8251:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8251:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 8251:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8251:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 8251:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8251:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 8251:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8251:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages 12[41456]: segfault at 8 ip 00007f2e20848875 sp 00007ffe31af7210 error 4 in ld-2.28.so[7f2e20827000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 47 previous similar messages Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 47 previous similar messages Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 47 previous similar messages Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 
6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 47 previous similar messages Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 47 previous similar messages Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 47 previous similar messages 1[57835]: segfault at 8 ip 00007f42df107875 sp 00007ffe9dbf3640 error 4 in ld-2.28.so[7f42df0e6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: mdt00_010: service thread pid 8327 was inactive for 41.544 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Lustre: mdt00_018: service thread pid 9831 was inactive for 42.144 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. task:mdt00_016 state:I Lustre: Skipped 1 previous similar message task:mdt00_010 state:I task:mdt00_008 state:I stack:0 pid:8125 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_intent_getxattr+0x9f/0x440 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_layout+0x13d0/0x13d0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? 
_raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 stack:0 pid:9447 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 stack:0 pid:8327 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? woken_wake_function+0x30/0x30 mdt_object_lock_internal+0x20b/0x5a0 [mdt] ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? lu_object_find+0x1d/0x30 [obdclass] ? _raw_read_unlock+0x12/0x30 ? mdt_object_find+0x106/0x480 [mdt] ? 
cfs_hash_rw_unlock+0x11/0x30 [obdclass] mdt_getattr_name_lock+0x2249/0x3350 [mdt] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] mdt_intent_getattr+0x2e2/0x630 [mdt] tgt_enqueue+0xd0/0x300 [ptlrpc] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] tgt_request_handle+0x351/0x1c00 [ptlrpc] mdt_intent_policy+0x14b/0x670 [mdt] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ? _raw_read_unlock+0x12/0x30 ptlrpc_main+0xd30/0x1450 [ptlrpc] ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] kthread+0x1d1/0x200 tgt_enqueue+0xd0/0x300 [ptlrpc] ? set_kthread_struct+0x70/0x70 tgt_handle_request0+0x137/0xaf0 [ptlrpc] ret_from_fork+0x1f/0x30 tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b7ad63a600/0xa2e5d0fcada83869 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1270:0x0].0x0 bits 0x13/0x0 rrc: 9 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcada8383f expref: 521 pid: 6384 timeout: 316 lvb_type: 0 Lustre: mdt00_018: service thread pid 9831 completed after 103.594s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_010: service thread pid 8327 completed after 102.994s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_008: service thread pid 8125 completed after 103.486s. 
This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 9447:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000573e6651 ns: mdt-lustre-MDT0000_UUID lock: ffff97b7b31ae000/0xa2e5d0fcada83cfa lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1270:0x0].0x0 bits 0x13/0x0 rrc: 6 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcada83cc2 expref: 102 pid: 9447 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: mdt00_016: service thread pid 9447 completed after 103.539s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 63424:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000401:0x1270:0x0] error -5. LustreError: 63596:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b830b40000: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108 LustreError: 63428:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 63428:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 2 previous similar messages LustreError: 63596:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff97b830b40000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff97b7f5337400) refcount nonzero (6) after lock cleanup; forcing cleanup. 
Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection restored to (at 0@lo) Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 45 previous similar messages Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 45 previous similar messages Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 45 previous similar messages Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 45 previous similar messages Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 45 previous similar messages Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 45 previous similar messages hrtimer: interrupt took 2100240 ns 0[80137]: segfault at 8 ip 00007f863ebd8875 sp 00007ffeddaa6390 error 4 in ld-2.28.so[7f863ebb7000+2f000] 0[80144]: segfault at 8 ip 00007fad5d0ba875 sp 00007ffc98ac5a40 error 4 in ld-2.28.so[7fad5d099000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 83857:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b7f53ea800: inode 
[0x200000403:0x764:0x0] mdc close failed: rc = -13 LustreError: 83857:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages Lustre: 57798:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x880:0x0] with magic=0xbd60bd0 LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: filter-lustre-OST0003_UUID lock: ffff97b81faa7e00/0xa2e5d0fcada9c539 lrc: 3/0,0 mode: PW/PW res: [0x300000400:0x142:0x0].0x0 rrc: 5 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000400020020 nid: 0@lo remote: 0xa2e5d0fcada9c532 expref: 11 pid: 6032 timeout: 424 lvb_type: 0 LustreError: lustre-OST0003-osc-ffff97b830b40000: operation ost_sync to node 0@lo failed: rc = -107 Lustre: lustre-OST0003-osc-ffff97b830b40000: Connection to lustre-OST0003 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-OST0003-osc-ffff97b830b40000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. Lustre: 3158:0:(llite_lib.c:4232:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.2@tcp:/lustre/fid: [0x200000403:0x6c:0x0]/ may get corrupted (rc -108) LustreError: 92553:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0003-osc-ffff97b830b40000: namespace resource [0x300000400:0x142:0x0].0x0 (ffff97b7dada1300) refcount nonzero (2) after lock cleanup; forcing cleanup. 
LustreError: 92553:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-OST0003-osc-ffff97b830b40000: Connection restored to (at 0@lo) Lustre: lustre-OST0003-osc-ffff97b830b40000: disconnect after 20s idle LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b7ad62b600/0xa2e5d0fcadbd9958 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1d41:0x0].0x0 bits 0x1b/0x0 rrc: 15 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcadbd98fd expref: 791 pid: 9831 timeout: 482 lvb_type: 0 LustreError: 8551:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000c6c8fc14 ns: mdt-lustre-MDT0000_UUID lock: ffff97b816e55a00/0xa2e5d0fcadbddf74 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1d41:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcadbddee8 expref: 81 pid: 8551 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 11386:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754678129 with bad export cookie 11738017787472359288 Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 92316:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1d41:0x0] error: rc = -5 LustreError: 92316:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 5 previous similar messages LustreError: 92516:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff97b7f53ea800: dir page locate: [0x200000401:0x1:0x0] at 0: rc -5 LustreError: 92516:0:(mdc_request.c:1492:mdc_read_page()) Skipped 2 previous similar messages LustreError: 92568:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b7f53ea800: inode [0x200000401:0x1d3f:0x0] mdc close failed: rc = -108 LustreError: 92523:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff97b7f53ea800: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 91940:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000401:0x1d41:0x0] error -108. Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection restored to (at 0@lo) Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 9404:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 77 previous similar messages Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 9404:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 9404:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 9404:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 9404:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 
0/0/0, ref_del: 0/0/0 Lustre: 9404:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 77 previous similar messages 17[100505]: segfault at 0 ip 000055f0f488fb47 sp 00007ffe9d329a00 error 6 in 17[55f0f488b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 5[102422]: segfault at 0 ip 000055b4ffe45b47 sp 00007fff2315bb10 error 6 in 5[55b4ffe41000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 6[105502]: segfault at 0 ip 0000560467dc3b47 sp 00007ffe8c33cf70 error 6 in 6 (deleted)[560467dbf000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 9831:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x10e9:0x0] with magic=0xbd60bd0 Lustre: 9831:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 8116:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x754:0x0] with magic=0xbd60bd0 Lustre: 8116:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 3[116651]: segfault at 8 ip 00007f34046d5875 sp 00007fff8a9d2200 error 4 in ld-2.28.so[7f34046b4000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 9447:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file 
[0x200000404:0xfa0:0x0] with magic=0xbd60bd0 Lustre: 9447:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 5[146204]: segfault at 560dc968ff24 ip 0000560dc968ff24 sp 00007ffe71533f48 error 7 in 5 (deleted)[560dc968b000+7000] Code: 84 00 00 00 00 00 f3 0f 1e fa c3 66 2e 0f 1f 84 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 13[148523]: segfault at 0 ip 00005622fedcab47 sp 00007ffd351b7310 error 6 in 13[5622fedc6000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 8116:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x24ac:0x0] with magic=0xbd60bd0 Lustre: 8116:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b80239b200/0xa2e5d0fcadc6135f lrc: 3/0,0 mode: CR/CR res: [0x200000404:0x40e:0x0].0x0 bits 0xa/0x0 rrc: 9 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcadc610e9 expref: 1031 pid: 8116 timeout: 607 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 1 previous similar message LustreError: 11055:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff97b81a8b1180 x1839912996996096/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 
projid:4294967295 LustreError: 11055:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000e2da67c1 ns: mdt-lustre-MDT0000_UUID lock: ffff97b80ed1fa00/0xa2e5d0fcadc6183d lrc: 4/0,0 mode: CR/CR res: [0x200000404:0x40e:0x0].0x0 bits 0xa/0x0 rrc: 6 type: IBT gid 0 flags: 0x70200400000020 nid: 0@lo remote: 0xa2e5d0fcadc612ef expref: 77 pid: 11055 timeout: 711 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 11055:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: 161191:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b830b40000: inode [0x200000403:0x2a26:0x0] mdc close failed: rc = -5 LustreError: 161191:0:(file.c:248:ll_close_inode_openhandle()) Skipped 10 previous similar messages LustreError: 161191:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x2a26:0x0] error: rc = -108 LustreError: 161191:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 79 previous similar messages LustreError: 104558:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 104222:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x40e:0x0] error -5. LustreError: 104222:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 161290:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff97b830b40000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff97b7f5081d00) refcount nonzero (2) after lock cleanup; forcing cleanup. 
LustreError: 161290:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection restored to (at 0@lo) 15[165255]: segfault at 8 ip 00007f3ccd523875 sp 00007fff21fe9300 error 4 in ld-2.28.so[7f3ccd502000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 10[165175]: segfault at 0 ip 000055d886a3fb47 sp 00007ffe6df907c0 error 6 in 10[55d886a3b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 16[169235]: segfault at 8 ip 00007f73139e2875 sp 00007ffc8e0c3bd0 error 4 in ld-2.28.so[7f73139c1000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 2[170069]: segfault at 5640190b8000 ip 00005640190b8000 sp 00007fffbf9fd0a0 error 14 in 2[5640192b8000+1000] Code: Unable to access opcode bytes at RIP 0x5640190b7fd6. 
Lustre: 7779:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x61d:0x0] with magic=0xbd60bd0 Lustre: 7779:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages 5[182868]: segfault at 8 ip 00007fa1c4af9875 sp 00007ffd5d2a3470 error 4 in ld-2.28.so[7fa1c4ad8000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 8542:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x285e:0x0] with magic=0xbd60bd0 Lustre: 8542:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 0[200271]: segfault at 0 ip 0000556d5f244b47 sp 00007ffdb0d2afe0 error 6 in 0[556d5f240000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 67467:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 67467:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 181 previous similar messages Lustre: 67467:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 67467:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 181 previous similar messages Lustre: 67467:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 67467:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 181 previous similar messages Lustre: 67467:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 67467:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 181 previous similar messages Lustre: 67467:0:(osd_handler.c:1990:osd_trans_dump_creds()) 
insert: 0/0/0, delete: 0/0/0 Lustre: 67467:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 181 previous similar messages Lustre: 67467:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 67467:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 181 previous similar messages 10[230720]: segfault at 8 ip 00007f6888fa8875 sp 00007ffe27ae6af0 error 4 in ld-2.28.so[7f6888f87000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 19[236525]: segfault at 0 ip 00005611135f1b47 sp 00007ffeff4a3920 error 6 in 19[5611135ed000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 7922:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x1a29:0x0] with magic=0xbd60bd0 Lustre: 7922:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b7ad6ddc00/0xa2e5d0fcae175cf3 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x3600:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcae175cd0 expref: 766 pid: 8542 timeout: 884 lvb_type: 0 LustreError: 9831:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000041792dd1 ns: mdt-lustre-MDT0000_UUID lock: ffff97b862d88200/0xa2e5d0fcae17686f lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x3600:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcae1767c7 expref: 291 pid: 9831 timeout: 0 lvb_type: 0 LustreError: 
5741:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754678530 with bad export cookie 11738017787478502306 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 237111:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x3600:0x0] error -108. LustreError: 9831:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 1 previous similar message LustreError: 237459:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b830b40000: inode [0x200000404:0x357c:0x0] mdc close failed: rc = -108 LustreError: 237459:0:(file.c:248:ll_close_inode_openhandle()) Skipped 18 previous similar messages LustreError: 237411:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 237411:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 238 previous similar messages Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection restored to (at 0@lo) LustreError: 238523:0:(statahead.c:2457:start_statahead_thread()) lustre: unsupported statahead pattern 0X0. 
LustreError: 246623:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b7f53ea800: inode [0x200000404:0x393c:0x0] mdc close failed: rc = -16 LustreError: 246623:0:(file.c:248:ll_close_inode_openhandle()) Skipped 16 previous similar messages LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b7adc66000/0xa2e5d0fcae1ebc53 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x39fa:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcae1ebbea expref: 170 pid: 9325 timeout: 1002 lvb_type: 0 Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 9830:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000040fa03be ns: mdt-lustre-MDT0000_UUID lock: ffff97b7fb56f600/0xa2e5d0fcae1ebd9c lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x39fa:0x0].0x0 bits 0x20/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcae1ebc61 expref: 142 pid: 9830 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: lustre-MDT0000-mdc-ffff97b830b40000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: 248416:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b830b40000: inode [0x200000404:0x39fa:0x0] mdc close failed: rc = -108 LustreError: 248370:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x39fa:0x0] error: rc = -5 LustreError: 248489:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x39fa:0x0] error -5. 
LustreError: 248370:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 143 previous similar messages Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection restored to (at 0@lo) 14[257514]: segfault at 8 ip 00007f9113e73875 sp 00007ffc33cc3f70 error 4 in ld-2.28.so[7f9113e52000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 9831:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000407:0x35d:0x0] with magic=0xbd60bd0 Lustre: 9831:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 4[277210]: segfault at 8 ip 00007fdb0abf0875 sp 00007fff546719e0 error 4 in ld-2.28.so[7fdb0abcf000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 3[295036]: segfault at 8 ip 00007ff895c16875 sp 00007ffe3050f150 error 4 in ld-2.28.so[7ff895bf5000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 16[295291]: segfault at 8 ip 00007f4c0d674875 sp 00007ffff197c8e0 error 4 in ld-2.28.so[7f4c0d653000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 3[296668]: segfault at 0 ip 000055aead7deb47 sp 00007ffd55dbcc90 error 6 in 3[55aead7da000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 traps: 16[301344] general 
protection fault ip:56023184f0d6 sp:7ffc614bc8c8 error:0 in 16[56023184a000+7000] 10[314775]: segfault at 8 ip 00007f052f0b6875 sp 00007ffe3085e9f0 error 4 in ld-2.28.so[7f052f095000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 11[321276]: segfault at 8 ip 00007f60e77eb875 sp 00007ffed5aa4000 error 4 in ld-2.28.so[7f60e77ca000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: lustre-OST0002-osc-ffff97b7f53ea800: disconnect after 21s idle Lustre: lustre-OST0000-osc-ffff97b7f53ea800: disconnect after 23s idle Lustre: Skipped 1 previous similar message LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b80d391a00/0xa2e5d0fcae54a558 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x55e0:0x0].0x0 bits 0x1b/0x0 rrc: 14 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcae54a4a2 expref: 1231 pid: 7922 timeout: 1291 lvb_type: 0 LustreError: 22967:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000084cba00f ns: mdt-lustre-MDT0000_UUID lock: ffff97b7dba35000/0xa2e5d0fcae54f96d lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x55e0:0x0].0x0 bits 0x1b/0x0 rrc: 11 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcae54f7c9 expref: 202 pid: 22967 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 22967:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 1 previous similar message LustreError: 5741:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754678940 with bad export cookie 
11738017787476308177 Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: Skipped 1 previous similar message LustreError: 326372:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x55e0:0x0] error: rc = -5 LustreError: 324741:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000407:0x1a64:0x0] error -108. LustreError: 326434:0:(lmv_obd.c:1468:lmv_statfs()) lustre-MDT0000-mdc-ffff97b7f53ea800: can't stat MDS #0: rc = -108 LustreError: 324741:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b7f53ea800: inode [0x200000407:0x1a64:0x0] mdc close failed: rc = -108 LustreError: 326067:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 324741:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages LustreError: 326606:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff97b7f53ea800: namespace resource [0x200000007:0x1:0x0].0x0 (ffff97b7c4255500) refcount nonzero (8) after lock cleanup; forcing cleanup. 
Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection restored to (at 0@lo) Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 582, rollback = 7 Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 291 previous similar messages Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 291 previous similar messages Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 291 previous similar messages Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/582/0, punch: 0/0/0, quota 1/3/0 Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 291 previous similar messages Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 291 previous similar messages Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 291 previous similar messages LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b7b30eea00/0xa2e5d0fcae565c4b lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x91:0x0].0x0 bits 0x13/0x0 rrc: 14 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcae565c21 expref: 779 pid: 8116 timeout: 1398 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 2 previous similar messages LustreError: 
lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 328226:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 328226:0:(llite_lib.c:2040:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 328222:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x91:0x0] error: rc = -5 LustreError: 328231:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000408:0x91:0x0] error -5. LustreError: 328231:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 328544:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b830b40000: inode [0x200000407:0x1ad9:0x0] mdc close failed: rc = -108 LustreError: 328544:0:(file.c:248:ll_close_inode_openhandle()) Skipped 7 previous similar messages LustreError: 328222:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 144 previous similar messages LustreError: 328504:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff97b830b40000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 328504:0:(mdc_request.c:1477:mdc_read_page()) Skipped 4 previous similar messages LustreError: 328544:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff97b830b40000: namespace resource [0x200000007:0x1:0x0].0x0 (ffff97b8163f0200) refcount nonzero (2) after lock cleanup; forcing cleanup. 
LustreError: 328544:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection restored to (at 0@lo) Lustre: 9325:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000409:0x928:0x0] with magic=0xbd60bd0 Lustre: 9325:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 7 previous similar messages traps: 9[354918] trap invalid opcode ip:5565540ea7e9 sp:7ffe05cbe858 error:0 in 9[5565540e4000+7000] 17[379273]: segfault at 0 ip 000056495137cb47 sp 00007fffe55a4ed0 error 6 in 17[564951378000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b789bc1200/0xa2e5d0fcae7f88a0 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x16f6:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcae7f8884 expref: 568 pid: 8542 timeout: 1638 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: 5740:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754679284 with bad export cookie 11738017787486207283 Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 2 previous similar messages LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 387912:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b7f53ea800: inode [0x200000409:0x1638:0x0] mdc close failed: rc = -108 LustreError: 387773:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x16f6:0x0] error: rc = -5 LustreError: 387773:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 35 previous similar messages LustreError: 387912:0:(file.c:248:ll_close_inode_openhandle()) Skipped 12 previous similar messages LustreError: 387912:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff97b7f53ea800: namespace resource [0x200000007:0x1:0x0].0x0 (ffff97b8024f3000) refcount nonzero (1) after lock cleanup; forcing cleanup. LustreError: 387912:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection restored to (at 0@lo) LustreError: 8141:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000000ba8e74 ns: mdt-lustre-MDT0000_UUID lock: ffff97b7b31ff400/0xa2e5d0fcae808001 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x1608:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcae807fd7 expref: 256 pid: 8141 timeout: 0 lvb_type: 0 LustreError: 5739:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754679390 with bad export cookie 11738017787486307208 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 8141:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: 9447:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff97b825dc2300 x1839913171047040/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 389239:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x1608:0x0] error: rc = -5 LustreError: 389239:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 31 previous similar messages 17[390210]: segfault at 0 ip 0000559bd20bab47 sp 00007ffdd54a0e30 error 6 in 17[559bd20b6000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 17[392315]: segfault at 8 ip 00007f467c8b0875 sp 00007ffdcd73dff0 error 4 in ld-2.28.so[7f467c88f000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 5756:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040a:0x5c:0x0] with magic=0xbd60bd0 Lustre: 5756:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 5 previous similar messages 1[399609]: segfault at 8 ip 00007f93eeaae875 sp 00007ffddf1e8540 error 4 in ld-2.28.so[7f93eea8d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 3[400260]: segfault at 0 ip 00005617c534fb47 sp 00007ffffd4ca750 error 6 in 3[5617c534b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0[430104]: segfault at 8 ip 00007faa752d0875 sp 00007ffdf07d58f0 error 4 in ld-2.28.so[7faa752af000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 6037:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6037:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 247 previous similar messages Lustre: 6037:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6037:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 247 previous similar messages Lustre: 6037:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6037:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 247 previous similar messages Lustre: 6037:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6037:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 247 previous similar messages Lustre: 6037:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6037:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 247 previous similar messages Lustre: 6037:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6037:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 247 previous similar messages LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b7b60f1600/0xa2e5d0fcaeaf1d48 lrc: 3/0,0 mode: PR/PR res: [0x20000040a:0x17c8:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcaeaf1d09 expref: 787 pid: 417868 timeout: 2023 lvb_type: 0 LustreError: 
5744:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1 previous similar message LustreError: 8465:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754679669 with bad export cookie 11738017787489001032 LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: Skipped 3 previous similar messages LustreError: 456434:0:(statahead.c:1807:is_first_dirent()) lustre: reading dir [0x200000401:0x1:0x0] at 0 stat_pid = 456649 : rc = -5 LustreError: 456434:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x20000040a:0x17c8:0x0] error: rc = -108 LustreError: 456700:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff97b7f53ea800: inode [0x20000040b:0x13f6:0x0] mdc close failed: rc = -108 LustreError: 456434:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 64 previous similar messages LustreError: 456700:0:(file.c:248:ll_close_inode_openhandle()) Skipped 13 previous similar messages LustreError: 456649:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff97b7f53ea800: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 456649:0:(mdc_request.c:1477:mdc_read_page()) Skipped 9 previous similar messages Lustre: lustre-MDT0000-mdc-ffff97b7f53ea800: Connection restored to (at 0@lo) Lustre: Skipped 1 previous similar message 17[466036]: segfault at 8 ip 00007fc80d185875 sp 00007ffd8938bff0 error 4 in ld-2.28.so[7fc80d164000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 
08 48 01 c6 80 3e 00 74 17 48 4[470291]: segfault at 8 ip 00007f1ebdb41875 sp 00007ffed25fa700 error 4 in ld-2.28.so[7f1ebdb20000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 7937:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000008dd23ac ns: mdt-lustre-MDT0000_UUID lock: ffff97b80f4fea00/0xa2e5d0fcaebc7d52 lrc: 3/0,0 mode: PR/PR res: [0x20000040b:0x1c80:0x0].0x0 bits 0x1b/0x0 rrc: 9 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0xa2e5d0fcaebc7d36 expref: 155 pid: 7937 timeout: 0 lvb_type: 0 LustreError: 474530:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x20000040b:0x1c80:0x0] error -5. LustreError: 474530:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 474526:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff97b830b40000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 474526:0:(mdc_request.c:1477:mdc_read_page()) Skipped 11 previous similar messages LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff97b791d6fe00/0xa2e5d0fcaebd3f53 lrc: 3/0,0 mode: PR/PR res: [0x20000040c:0x690:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xa2e5d0fcaebd3f22 expref: 55 pid: 11045 timeout: 2281 lvb_type: 0 LustreError: 5744:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff97b830b40000: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 5739:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 
1754679927 with bad export cookie 11738017787493001301 LustreError: Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff97b830b40000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 476062:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 476062:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 124 previous similar messages Lustre: lustre-MDT0000-mdc-ffff97b830b40000: Connection restored to (at 0@lo) Lustre: Skipped 1 previous similar message LustreError: 7922:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000004c941688 ns: mdt-lustre-MDT0000_UUID lock: ffff97b862d3c600/0xa2e5d0fcaebe4738 lrc: 3/0,0 mode: PR/PR res: [0x20000040c:0x6df:0x0].0x0 bits 0x1b/0x0 rrc: 8 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xa2e5d0fcaebe4715 expref: 30 pid: 7922 timeout: 0 lvb_type: 0 LustreError: 406582:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754680033 with bad export cookie 11738017787492119588 LustreError: lustre-MDT0000-mdc-ffff97b7f53ea800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 7922:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 1 previous similar message LustreError: 477047:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x20000040c:0x6df:0x0] error -108. 
Lustre: 334305:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040f:0x96:0x0] with magic=0xbd60bd0 Lustre: 334305:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 7 previous similar messages 8[484219]: segfault at 8 ip 00007fd248afb875 sp 00007ffd0fb4c2a0 error 4 in ld-2.28.so[7fd248ada000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 13[509941]: segfault at 0 ip 000055def5520b47 sp 00007fffaf419d40 error 6 in 13[55def551c000+7000] Code: Unable to access opcode bytes at RIP 0x55def5520b1d. Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6035:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 249 previous similar messages Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6035:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 249 previous similar messages Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6035:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 249 previous similar messages Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6035:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 249 previous similar messages Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6035:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 249 previous similar messages Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6035:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 249 previous similar messages | Link to test |
racer test 1: racer on clients: centos-80.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 13 PID: 10526 Comm: ll_sa_9797 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffb8b8ca93fe90 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008020001b RDX: 000000008020001c RSI: ffff9805e60114c8 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff980600352400 R11: ffff9805eb184000 R12: ffff9805e6011480 R13: ffff9806003524b8 R14: ffff9805e6011148 R15: ffff9805e60114c8 FS: 0000000000000000(0000) GS:ffff980772540000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000001ad3ae000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? update_load_avg+0x9f/0xa40 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon i2c_piix4 pcspkr rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 7953:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9805e4584980 x1839699794893312/t4294967609(0) o101->a1da610b-3c86-4862-9974-411f54883ecd@0@lo:217/0 lens 376/864 e 0 to 0 dl 1754474502 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 8390:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8390:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8390:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8390:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 8390:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8390:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 | Link to test |
racer test 2: racer rename: trevis-79vm1.trevis.whamcloud.com,trevis-79vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 682438 Comm: ll_sa_682316 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.58.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 23 6c 83 75 5b c3 cc cc cc cc 48 89 df e8 85 0a af ff 39 05 73 90 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb27fc4797e08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000d RDX: 000000000010000e RSI: ffff9c224620a770 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9c22303c5a00 R11: 0000000000000000 R12: ffff9c224620a490 R13: ffff9c22303c5a98 R14: ffff9c22303c5a00 R15: ffff9c22303c5aa8 FS: 0000000000000000(0000) GS:ffff9c22bcc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000b2010002 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel serio_raw net_failover virtio_blk failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 426824:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000be8:0x1447:0x0]: rc = -5 LustreError: 426824:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 38 previous similar messages LustreError: 426824:0:(llite_lib.c:3770:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 426824:0:(llite_lib.c:3770:ll_prep_inode()) Skipped 38 previous similar messages LustreError: 447149:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff9c22048c9800: cannot apply new layout on [0x280000be7:0x11ec:0x0] : rc = -5 Lustre: dir [0x200000bea:0x1c1e:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 230 previous similar messages Lustre: dir [0x200000beb:0x37c4:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 10 previous similar messages Autotest: Test running for 280 minutes (lustre-reviews_review-dne-part-9_115559.34) | Link to test |
racer test 1: racer on clients: centos-20.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 9 PID: 521092 Comm: ll_sa_521066 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffa20c15ed7e90 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000080200017 RDX: 0000000080200018 RSI: ffff9422b4651d08 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff94226af78800 R11: 0000000000009448 R12: ffff9422b4651cc0 R13: ffff94226af788b8 R14: ffff9422b4651988 R15: ffff9422b4651d08 FS: 0000000000000000(0000) GS:ffff9423f2440000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000145e38000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 7897:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff942236501180 x1839614172631552/t4294967844(0) o101->e53a1ae6-ea19-4cfd-9f7c-8de4bb3a079f@0@lo:92/0 lens 376/816 e 0 to 0 dl 1754392837 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 traps: 4[9051] general protection fault ip:56480f8ba136 sp:7ffdb02f86a0 error:0 in 4[56480f8b5000+7000] Lustre: 6023:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 6023:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6023:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 6023:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6023:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6023:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6025:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 6025:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6025:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6025:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6025:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, 
xattr_set: 2/15/0 Lustre: 6025:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6025:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6025:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6025:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6025:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6025:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6025:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8272:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8272:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 3 previous similar messages Lustre: 8272:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8272:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8272:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8272:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8272:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 8272:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8272:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8272:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8272:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8272:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8582:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 
8582:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 3 previous similar messages Lustre: 8582:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8582:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8582:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8582:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8582:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 8582:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8582:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8582:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 8582:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8582:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3 previous similar messages Lustre: 6025:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6025:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 6025:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6025:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6025:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6025:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6025:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6025:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6025:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6025:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 
6025:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6025:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 8456:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 546, rollback = 7 Lustre: 8456:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 9 previous similar messages Lustre: 8456:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8456:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8456:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8456:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8456:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/546/0, punch: 0/0/0, quota 4/150/0 Lustre: 8456:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8456:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8456:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8456:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8456:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8272:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 8272:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 9 previous similar messages Lustre: 8272:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8272:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8272:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 8272:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8272:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 
0/0/0, quota 4/150/0 Lustre: 8272:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8272:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8272:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 8272:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8272:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 9 previous similar messages Lustre: 7947:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x791:0x0] with magic=0xbd60bd0 13[41302]: segfault at 0 ip 00005561d3e19b47 sp 00007ffed6fe1210 error 6 in 13[5561d3e15000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 8456:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8456:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 31 previous similar messages Lustre: 8456:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8456:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 8456:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8456:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 8456:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8456:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 8456:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8456:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 8456:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 
0/0/0, ref_del: 0/0/0 Lustre: 8456:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 31 previous similar messages 19[47056]: segfault at 8 ip 00007f323a917875 sp 00007ffd03178a90 error 4 in ld-2.28.so[7f323a8f6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 6[47896]: segfault at 55d30c735000 ip 000055d30c735000 sp 00007ffd4bf93438 error 14 in 6[55d30c935000+1000] Code: Unable to access opcode bytes at RIP 0x55d30c734fd6. 17[50340]: segfault at 8 ip 00007f8522c5b875 sp 00007ffe3be4c1f0 error 4 in ld-2.28.so[7f8522c3a000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 traps: 0[54887] general protection fault ip:558fe12a1d40 sp:7fff5ec876a8 error:0 in 0[558fe129c000+7000] 12[59668]: segfault at ffffffff ip 00005626fe0e05c0 sp 00007fff11ca8318 error 6 in 12[5626fe0df000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 17604:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x13c1:0x0] with magic=0xbd60bd0 Lustre: 17604:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 13025:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x1141:0x0] with magic=0xbd60bd0 Lustre: 13025:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 10[69655]: segfault at 0 ip 000055abc3593b47 sp 00007ffd1fa86130 error 6 in 10[55abc358f000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 11[73977]: segfault at 5637d2104000 ip 00005637d2104000 sp 00007ffc55993498 error 14 in 11[5637d2304000+1000] Code: Unable to access opcode bytes at RIP 0x5637d2103fd6. Lustre: 6023:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6023:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 73 previous similar messages Lustre: 6023:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6023:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 73 previous similar messages Lustre: 6023:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6023:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 73 previous similar messages Lustre: 6023:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6023:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 73 previous similar messages Lustre: 6023:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6023:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 73 previous similar messages Lustre: 6023:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6023:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 73 previous similar messages Lustre: 7848:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x1bbe:0x0] with magic=0xbd60bd0 Lustre: 7848:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 9[84889]: segfault at 0 ip 000055f6f944a200 sp 00007ffe812e6508 error 6 in 9[55f6f9448000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 5[93676]: segfault at 8 ip 00007f86c8b71875 sp 00007fffc424d820 error 4 in ld-2.28.so[7f86c8b50000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[96822]: segfault at 8 ip 00007f7c6f398875 sp 00007fffd2ebb5a0 error 4 in ld-2.28.so[7f7c6f377000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[97118]: segfault at 8 ip 00007fe085283875 sp 00007ffe50550990 error 4 in ld-2.28.so[7fe085262000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[96959]: segfault at 8 ip 00007f646ab19875 sp 00007ffc9c5e0a60 error 4 in ld-2.28.so[7f646aaf8000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 2[104472]: segfault at 0 ip 0000559079a79b47 sp 00007ffe72f9c470 error 6 in 2[559079a75000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 117475:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000402:0x27e1:0x0] mdc close failed: rc = -13 1[128812]: segfault at 0 ip 000055f230d95a88 sp 00007fffd900ae40 error 6 in 18[55f230d94000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 Lustre: 69912:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 69912:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 171 previous similar messages Lustre: 69912:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 69912:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 171 previous similar messages Lustre: 69912:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 69912:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 171 previous similar messages Lustre: 69912:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/78/0 Lustre: 69912:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 171 previous similar messages Lustre: 69912:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 69912:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 171 previous similar messages Lustre: 69912:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 69912:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 171 previous similar messages Lustre: mdt00_022: service thread pid 29322 was inactive for 40.078 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:mdt00_022 state:I stack:0 pid:29322 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] ? 
mdt_object_find+0x106/0x480 [mdt] mdt_object_find_lock+0x72/0x1c0 [mdt] mdt_reint_setxattr+0x1ba/0x1830 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Lustre: mdt00_006: service thread pid 7848 was inactive for 44.087 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Lustre: mdt00_009: service thread pid 7947 was inactive for 43.904 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. task:mdt00_006 state:I task:mdt00_014 state:I stack:0 pid:10622 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] Lustre: Skipped 2 previous similar messages stack:0 pid:7848 ppid:2 flags:0x80004080 ? _raw_read_unlock+0x12/0x30 Call Trace: __schedule+0x351/0xcb0 ? 
cfs_hash_rw_unlock+0x11/0x30 [obdclass] schedule+0xc0/0x180 ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] schedule_timeout+0xb4/0x190 tgt_enqueue+0xd0/0x300 [ptlrpc] ? __next_timer_interrupt+0x160/0x160 tgt_handle_request0+0x137/0xaf0 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] ? mdt_object_find+0x106/0x480 [mdt] ? lustre_msg_add_version+0x29/0xd0 [ptlrpc] mdt_object_find_lock+0x72/0x1c0 [mdt] mdt_reint_setxattr+0x1ba/0x1830 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] mdt_reint_internal+0x6a0/0xdc0 [mdt] kthread+0x1d1/0x200 mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] ? set_kthread_struct+0x70/0x70 kthread+0x1d1/0x200 ret_from_fork+0x1f/0x30 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff942256360a00/0x95447d02dc64d586 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x2b41:0x0].0x0 bits 0x13/0x0 rrc: 11 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dc64d56a expref: 1077 pid: 75697 timeout: 425 lvb_type: 0 Lustre: mdt00_022: service thread pid 29322 completed after 101.531s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_006: service thread pid 7848 completed after 101.444s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_019: service thread pid 17601 completed after 100.930s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_009: service thread pid 7947 completed after 101.263s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_010: service thread pid 7978 completed after 101.103s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_014: service thread pid 10622 completed after 100.811s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 8040:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754393190 with bad export cookie 10755859261302959255 Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff942235ef2000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 133127:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff942235ef2000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 133018:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 133127:0:(mdc_request.c:1477:mdc_read_page()) Skipped 10 previous similar messages LustreError: 133018:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 15 previous similar messages LustreError: 133432:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000402:0x2cd0:0x0] mdc close failed: rc = -108 LustreError: 132712:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000401:0x2b41:0x0] error -108. Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection restored to (at 0@lo) 17[138874]: segfault at 0 ip 0000561e74d3ab47 sp 00007ffe4e36e700 error 6 in 17[561e74d36000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 7[138996]: segfault at 0 ip 000055678250cb47 sp 00007ffd7769bc20 error 6 in 17[556782508000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 5723:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x331:0x0] with magic=0xbd60bd0 Lustre: 5723:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 75697:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x3141:0x0] with magic=0xbd60bd0 Lustre: 75697:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message LustreError: 
5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9421d6163400/0x95447d02dc720cf5 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x3227:0x0].0x0 bits 0x1b/0x0 rrc: 13 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dc720cd9 expref: 222 pid: 8264 timeout: 583 lvb_type: 0 LustreError: 29322:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff94225e60d780 x1839614279091200/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: lustre-MDT0000-mdc-ffff942235ef2000: operation mds_sync to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 75697:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000005ec26324 ns: mdt-lustre-MDT0000_UUID lock: ffff9422a05cfc00/0x95447d02dc721108 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x3227:0x0].0x0 bits 0x20/0x0 rrc: 8 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x95447d02dc7210fa expref: 42 pid: 75697 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff942235ef2000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 152176:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 152158:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000401:0x3227:0x0] mdc close failed: rc = -108 LustreError: 152176:0:(llite_lib.c:2040:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 152158:0:(file.c:248:ll_close_inode_openhandle()) Skipped 4 previous similar messages LustreError: 151617:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 151617:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 11 previous similar messages LustreError: 152413:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff942235ef2000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff942263f50100) refcount nonzero (4) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection restored to (at 0@lo) Lustre: 8566:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 8566:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 77 previous similar messages Lustre: 8566:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8566:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 8566:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 8566:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 8566:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8566:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 8566:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8566:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: 8566:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 
0/0/0, ref_del: 0/0/0 Lustre: 8566:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 77 previous similar messages Lustre: lustre-OST0000-osc-ffff9421cc4f0000: disconnect after 21s idle LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff942299805a00/0x95447d02dc7669fa lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x3433:0x0].0x0 bits 0x1b/0x0 rrc: 15 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dc76684f expref: 1230 pid: 10886 timeout: 711 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 158505:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x3433:0x0] error: rc = -5 LustreError: 158464:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000401:0x3433:0x0] error -5. 
LustreError: 158505:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 67 previous similar messages LustreError: 158464:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9421cc4f0000: inode [0x200000401:0x3433:0x0] mdc close failed: rc = -108 LustreError: 158464:0:(file.c:248:ll_close_inode_openhandle()) Skipped 7 previous similar messages LustreError: 158777:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9421cc4f0000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 158777:0:(mdc_request.c:1477:mdc_read_page()) Skipped 23 previous similar messages LustreError: 158838:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9421cc4f0000: namespace resource [0x200000007:0x1:0x0].0x0 (ffff94224c704500) refcount nonzero (2) after lock cleanup; forcing cleanup. LustreError: 158838:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection restored to (at 0@lo) 14[164438]: segfault at 0 ip 000055ca89c7ab47 sp 00007fffe69fc260 error 6 in 14[55ca89c76000+7000] Code: Unable to access opcode bytes at RIP 0x55ca89c7ab1d. 
Lustre: 13025:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x484:0x0] with magic=0xbd60bd0 Lustre: 13025:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 13[169761]: segfault at 0 ip 000055fcfb98b200 sp 00007fffc0889ee8 error 6 in 13[55fcfb989000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 171127:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000404:0x5d2:0x0] mdc close failed: rc = -13 LustreError: 171127:0:(file.c:248:ll_close_inode_openhandle()) Skipped 7 previous similar messages LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff94224c9c4000/0x95447d02dc7f02c4 lrc: 3/0,0 mode: CR/CR res: [0x200000404:0x5de:0x0].0x0 bits 0xa/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dc7f0293 expref: 253 pid: 75697 timeout: 843 lvb_type: 0 LustreError: 7848:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000094304e84 ns: mdt-lustre-MDT0000_UUID lock: ffff942280f97200/0x95447d02dc7f0d28 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x5de:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x95447d02dc7f0d0c expref: 6 pid: 7848 timeout: 0 lvb_type: 0 LustreError: 5708:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754393608 with bad export cookie 10755859261309460771 LustreError: 7848:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to 
complete LustreError: lustre-MDT0000-mdc-ffff942235ef2000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: lustre-MDT0000-mdc-ffff942235ef2000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 170987:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x5de:0x0] error -108. LustreError: 170987:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 170987:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x5de:0x0] error: rc = -108 LustreError: 170987:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 122 previous similar messages LustreError: 171232:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000404:0x53e:0x0] mdc close failed: rc = -108 LustreError: 171193:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff942235ef2000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 171193:0:(mdc_request.c:1477:mdc_read_page()) Skipped 10 previous similar messages Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection restored to (at 0@lo) 8[186901]: segfault at 8 ip 00007fc1440c0875 sp 00007ffd0c0a8fc0 error 4 in ld-2.28.so[7fc14409f000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[200642]: segfault at 8 ip 00007f0f5f3e5875 sp 00007ffe3dcb9300 error 4 in ld-2.28.so[7f0f5f3c4000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 204611:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000406:0xa82:0x0] mdc close failed: rc = -13 LustreError: 204611:0:(file.c:248:ll_close_inode_openhandle()) 
Skipped 3 previous similar messages 12[208870]: segfault at 0 ip 00005581aed96b47 sp 00007ffc54331230 error 6 in 5[5581aed92000+7000] Code: Unable to access opcode bytes at RIP 0x5581aed96b1d. 3[212769]: segfault at 8 ip 00007f226d713875 sp 00007ffd41c5a7e0 error 4 in ld-2.28.so[7f226d6f2000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 8[212893]: segfault at 8 ip 00007f824319b875 sp 00007ffe3508c1b0 error 4 in ld-2.28.so[7f824317a000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 218091:0:(statahead.c:2457:start_statahead_thread()) lustre: unsupported statahead pattern 0X10. 12[221014]: segfault at 8 ip 00007f7a8346d875 sp 00007ffe403d1150 error 4 in ld-2.28.so[7f7a8344c000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 1[223364]: segfault at 8 ip 00007fd17ffb0875 sp 00007fffedc15e30 error 4 in ld-2.28.so[7fd17ff8f000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 4[224156]: segfault at 8 ip 00007f34d1339875 sp 00007ffe43b52ff0 error 4 in ld-2.28.so[7f34d1318000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 7776:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x15a8:0x0] 
with magic=0xbd60bd0 Lustre: 7776:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 14[228310]: segfault at 0 ip 000055a329619b47 sp 00007ffc096df070 error 6 in 14[55a329615000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 hrtimer: interrupt took 4750643 ns Lustre: 7897:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0x1d03:0x0] with magic=0xbd60bd0 Lustre: 7897:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages 12[271727]: segfault at 0 ip 000056539afa1b47 sp 00007fff76d79520 error 6 in 12[56539af9d000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 273492:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff942235ef2000: inode [0x200000405:0x2673:0x0] mdc close failed: rc = -13 LustreError: 273492:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message 0[273547]: segfault at 0 ip 000056305bf1db47 sp 00007ffd29ceb490 error 6 in 12[56305bf19000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 214362:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 214362:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 331 previous similar messages Lustre: 214362:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 214362:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 
214362:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 214362:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 214362:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 214362:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 214362:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 214362:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 331 previous similar messages Lustre: 214362:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 214362:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 331 previous similar messages 10[284179]: segfault at 8 ip 00007fe040f42875 sp 00007ffe9abb7570 error 4 in ld-2.28.so[7fe040f21000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 1[284098]: segfault at 8 ip 00007fd2caa86875 sp 00007ffdf5141400 error 4 in ld-2.28.so[7fd2caa65000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 19[286653]: segfault at 8 ip 00007f109e0c7875 sp 00007ffd40d3e490 error 4 in ld-2.28.so[7f109e0a6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 11[297022]: segfault at 8 ip 00007ff7ef133875 sp 00007ffe3918cde0 error 4 in ld-2.28.so[7ff7ef112000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 
48 15[310266]: segfault at 0 ip 000056169f941b47 sp 00007ffe2f39f270 error 6 in 15[56169f93d000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 9[310866]: segfault at 0 ip 0000555780bd2b47 sp 00007ffcadd35ce0 error 6 in 17[555780bce000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 314433:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9421cc4f0000: inode [0x200000406:0x2ff5:0x0] mdc close failed: rc = -13 LustreError: 314433:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message Lustre: 7886:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0x329f:0x0] with magic=0xbd60bd0 Lustre: 7886:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 5 previous similar messages 8[327580]: segfault at 8 ip 00007fc9fe574875 sp 00007ffe047d8230 error 4 in ld-2.28.so[7fc9fe553000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 8[327573]: segfault at 8 ip 00007fceba1e8875 sp 00007ffc1fd4fc60 error 4 in ld-2.28.so[7fceba1c7000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 7[329061]: segfault at 8 ip 00007f8e68712875 sp 00007ffd0dbe74c0 error 4 in ld-2.28.so[7f8e686f1000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 
23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 9[330061]: segfault at 8 ip 00007fb16553d875 sp 00007fffd4cd7ed0 error 4 in ld-2.28.so[7fb16551c000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 13[335126]: segfault at 8 ip 00007f62b8c62875 sp 00007ffc6f5f26f0 error 4 in ld-2.28.so[7f62b8c41000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9422a30fea00/0x95447d02dcf329f2 lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x3ab0:0x0].0x0 bits 0x1b/0x0 rrc: 20 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dcf329cf expref: 1609 pid: 75694 timeout: 1387 lvb_type: 0 LustreError: 7947:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 0000000061873d78 ns: mdt-lustre-MDT0000_UUID lock: ffff9422989d5400/0x95447d02dcf32d2c lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x3ab0:0x0].0x0 bits 0x20/0x0 rrc: 16 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x95447d02dcf32cf4 expref: 352 pid: 7947 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 4 previous similar messages LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 339982:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000406:0x3ab0:0x0] error: rc = -5 LustreError: 339982:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 16 previous similar messages LustreError: 339899:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 340104:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9421cc4f0000: inode [0x200000405:0x3c5b:0x0] mdc close failed: rc = -108 LustreError: 340104:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 340104:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9421cc4f0000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff94224c704100) refcount nonzero (2) after lock cleanup; forcing cleanup. LustreError: 340104:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 1 previous similar message Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection restored to (at 0@lo) ptlrpc_watchdog_fire: 2 callbacks suppressed Lustre: mdt_io00_001: service thread pid 5735 was inactive for 40.519 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Lustre: Skipped 1 previous similar message task:mdt_io00_001 state:I stack:0 pid:5735 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_check_lock+0x24f/0x4d0 [mdt] mdt_reint_rename+0x1835/0x34e0 [mdt] ? lustre_pack_reply_v2+0x1b0/0x380 [ptlrpc] ? 
ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 INFO: task mcreate:339937 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mcreate state:D stack:0 pid:339937 ppid:7609 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f40c175d041 Code: Unable to access opcode bytes at RIP 0x7f40c175d017. RSP: 002b:00007ffdc282cc48 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f40c175d041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffdc282deb5 RBP: 00007ffdc282deb5 R08: 00007ffdc282deb5 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffdc282ce38 R14: 00007ffdc282cc70 R15: fffff00000000000 INFO: task mcreate:339940 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mcreate state:D stack:0 pid:339940 ppid:7643 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? 
__might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7ffa4986d041 Code: Unable to access opcode bytes at RIP 0x7ffa4986d017. RSP: 002b:00007ffeb0174aa8 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ffa4986d041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffeb0175eb5 RBP: 00007ffeb0175eb5 R08: 00007ffeb0175eb5 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffeb0174c98 R14: 00007ffeb0174ad0 R15: fffff00000000000 INFO: task file_concat.sh:339953 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:file_concat.sh state:D stack:0 pid:339953 ppid:7605 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? lprocfs_counter_add+0x15b/0x210 [obdclass] down_write+0x80/0xd0 do_last+0x2eb/0xfc0 ? nd_jump_root+0xe5/0x160 ? path_init+0x437/0x520 path_openat+0xf7/0x500 do_filp_open+0x99/0x140 ? getname_flags+0x6e/0x330 ? __check_object_size+0xff/0x256 ? do_raw_spin_unlock+0x75/0x190 ? _raw_spin_unlock+0x12/0x30 do_sys_openat2+0x2b4/0x410 do_sys_open+0x73/0xa0 __x64_sys_openat+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7effef6ff332 Code: Unable to access opcode bytes at RIP 0x7effef6ff308. 
RSP: 002b:00007ffdf0a5e970 EFLAGS: 00000246 ORIG_RAX: 0000000000000101 RAX: ffffffffffffffda RBX: 000055d8da2c30b0 RCX: 00007effef6ff332 RDX: 0000000000000441 RSI: 000055d8da2bd210 RDI: 00000000ffffff9c RBP: 00007ffdf0a5ea70 R08: 0000000000000020 R09: 000055d8da2a3010 R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000001 R14: 0000000000000001 R15: 000055d8da2bd210 INFO: task mrename:339967 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:339967 ppid:7635 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 down_write+0x80/0xd0 lock_rename+0x144/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f4657c196cb Code: Unable to access opcode bytes at RIP 0x7f4657c196a1. RSP: 002b:00007ffd75a40778 EFLAGS: 00000206 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007ffd75a40868 RCX: 00007f4657c196cb RDX: 00007ffd75a40888 RSI: 00007ffd75a42ec6 RDI: 00007ffd75a42eb0 RBP: 0000000000400800 R08: 00007f4657f68d20 R09: 00007f4657f68d20 R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000400710 R13: 00007ffd75a40860 R14: 0000000000000000 R15: 0000000000000000 INFO: task mcreate:340008 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mcreate state:D stack:0 pid:340008 ppid:7763 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? 
__might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f5b64284041 Code: Unable to access opcode bytes at RIP 0x7f5b64284017. RSP: 002b:00007ffeba110ee8 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5b64284041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffeba112eb5 RBP: 00007ffeba112eb5 R08: 00007ffeba112eb5 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffeba1110d8 R14: 00007ffeba110f10 R15: fffff00000000000 INFO: task mcreate:340020 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mcreate state:D stack:0 pid:340020 ppid:7706 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7ff89e20f041 Code: Unable to access opcode bytes at RIP 0x7ff89e20f017. RSP: 002b:00007ffc128e07d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff89e20f041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffc128e1eb5 RBP: 00007ffc128e1eb5 R08: 00007ffc128e1eb5 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffc128e09c8 R14: 00007ffc128e0800 R15: fffff00000000000 INFO: task mcreate:340021 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. 
task:mcreate state:D stack:0 pid:340021 ppid:7729 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f51920d1041 Code: Unable to access opcode bytes at RIP 0x7f51920d1017. RSP: 002b:00007fffd5a5b198 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f51920d1041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007fffd5a5ceb5 RBP: 00007fffd5a5ceb5 R08: 00007fffd5a5ceb5 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007fffd5a5b388 R14: 00007fffd5a5b1c0 R15: fffff00000000000 INFO: task mrename:340023 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:340023 ppid:7716 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 down_write+0x80/0xd0 lock_rename+0x144/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f41ba4bf6cb Code: Unable to access opcode bytes at RIP 0x7f41ba4bf6a1. RSP: 002b:00007fff3fbdec78 EFLAGS: 00000206 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007fff3fbded68 RCX: 00007f41ba4bf6cb RDX: 00007fff3fbded88 RSI: 00007fff3fbdfec7 RDI: 00007fff3fbdfeb2 RBP: 0000000000400800 R08: 00007f41ba80ed20 R09: 00007f41ba80ed20 R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000400710 R13: 00007fff3fbded60 R14: 0000000000000000 R15: 0000000000000000 INFO: task rm:340043 blocked for more than 120 seconds. 
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:rm state:D stack:0 pid:340043 ppid:7744 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 down_write+0x80/0xd0 do_unlinkat+0x184/0x460 __x64_sys_unlinkat+0x4c/0xa0 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f7638c9813b Code: Unable to access opcode bytes at RIP 0x7f7638c98111. RSP: 002b:00007fffb017dde8 EFLAGS: 00000256 ORIG_RAX: 0000000000000107 RAX: ffffffffffffffda RBX: 0000561c39e78700 RCX: 00007f7638c9813b RDX: 0000000000000000 RSI: 0000561c39e774d0 RDI: 00000000ffffff9c RBP: 0000561c39e77440 R08: 0000000000000003 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000256 R12: 00007fffb017dfd0 R13: 0000000000000000 R14: 0000561c39e78700 R15: 0000000000000000 Lustre: lustre-OST0000-osc-ffff942235ef2000: disconnect after 22s idle Lustre: Skipped 1 previous similar message LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff94225431dc00/0x95447d02dcf349a3 lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x3ab0:0x0].0x0 bits 0x13/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dcf34995 expref: 1449 pid: 29322 timeout: 1489 lvb_type: 0 LustreError: 7978:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000f29a19f1 ns: mdt-lustre-MDT0000_UUID lock: ffff94223b324400/0x95447d02dcf34e5e lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 19 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x95447d02dcf34e50 expref: 43 pid: 7978 timeout: 0 lvb_type: 0 Lustre: mdt_io00_001: service thread pid 5735 completed after 101.985s. 
This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 7965:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754394255 with bad export cookie 10755859261310308828 Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: 7978:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: lustre-MDT0000-mdc-ffff942235ef2000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: lustre-MDT0000-mdc-ffff942235ef2000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: Skipped 3 previous similar messages LustreError: 340247:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5 LustreError: 340247:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 46 previous similar messages LustreError: 340184:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 Lustre: lustre-MDT0000-mdc-ffff942235ef2000: Connection restored to (at 0@lo) 16[341582]: segfault at e2eb61f0 ip 000055a95f42d1bc sp 00007fffbd74b7e0 error 6 in 16[55a95f427000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0b 00 <00> 00 00 00 34 00 00 00 fc 00 00 00 88 bd ff ff 7c 00 00 00 00 45 LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9421d616d000/0x95447d02dcf5acd3 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0xe9:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dcf5abd7 expref: 78 pid: 7848 timeout: 1600 lvb_type: 0 LustreError: 5723:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on 
destroyed export 00000000e93c9105 ns: mdt-lustre-MDT0000_UUID lock: ffff94224974f800/0x95447d02dcf5bbcf lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x35:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x95447d02dcf5bbc1 expref: 5 pid: 5723 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 8846:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754394366 with bad export cookie 10755859261317921041 LustreError: 8846:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) Skipped 1 previous similar message LustreError: 5723:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 6 previous similar messages LustreError: Skipped 6 previous similar messages LustreError: 343587:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x35:0x0] error: rc = -5 LustreError: 343503:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 343587:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 49 previous similar messages LustreError: 343783:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9421cc4f0000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 343783:0:(mdc_request.c:1477:mdc_read_page()) Skipped 5 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection restored to (at 0@lo) 7[350023]: segfault at 8 ip 00007f1eefb13875 sp 00007ffccc775aa0 error 4 in ld-2.28.so[7f1eefaf2000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 
48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 ODEBUG: object 000000003b1e3957 is on stack 00000000b06ca6eb, but NOT annotated. WARNING: CPU: 7 PID: 8178 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 7 PID: 8178 Comm: mdt00_011 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: 9e b5 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa20c088ab4a0 EFLAGS: 00010002 RAX: 0000000000000050 RBX: ffffa20c088ab5a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff9423f23de5a8 RDI: ffff9423f23de5a8 RBP: ffffffffb6105ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa20c088ab298 R12: ffffffffb79330c8 R13: 000000000004bd60 R14: ffffffffb79330c0 R15: ffff9422172576e0 FS: 0000000000000000(0000) GS:ffff9423f23c0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f76dfb0a4c0 CR3: 00000001a4ca5000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? 
lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? __might_sleep+0x59/0xc0 ? slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] ? osd_xattr_get+0x274/0x940 [osd_ldiskfs] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace a430607e20346d95 ]--- LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9422888c8200/0x95447d02dcfd0a65 lrc: 3/0,0 mode: PR/PR res: [0x200000409:0x35e:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dcfd09ee expref: 160 pid: 29322 timeout: 1733 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 354141:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000409:0x35e:0x0] error: rc = -5 LustreError: 354141:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 16 previous similar messages LustreError: 354353:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9421cc4f0000: inode [0x200000408:0x41c:0x0] mdc close failed: rc = -108 LustreError: 354353:0:(file.c:248:ll_close_inode_openhandle()) Skipped 25 previous similar messages LustreError: 354288:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9421cc4f0000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 354288:0:(mdc_request.c:1477:mdc_read_page()) Skipped 9 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection restored to (at 0@lo) Lustre: 8582:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8582:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 197 previous similar messages Lustre: 8582:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, 
destroy: 0/0/0 Lustre: 8582:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 197 previous similar messages Lustre: 8582:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8582:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 197 previous similar messages Lustre: 8582:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 8582:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 197 previous similar messages Lustre: 8582:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8582:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 197 previous similar messages Lustre: 8582:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8582:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 197 previous similar messages 13[358363]: segfault at 8 ip 00007fd5e1189875 sp 00007ffc1dfbc860 error 4 in ld-2.28.so[7fd5e1168000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 9[359163]: segfault at 0 ip 00005572348f23a0 sp 00007ffcbaa5ca78 error 6 in 9[5572348f0000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 15[361749]: segfault at 0 ip 000055ac418d0b47 sp 00007ffdd1531200 error 6 in 15[55ac418cc000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 19[362214]: segfault at 0 ip 0000561805449b47 sp 00007ffef50e8a70 error 6 in 16[561805445000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 10722:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040a:0x2a3:0x0] with magic=0xbd60bd0 Lustre: 10722:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages 13[370691]: segfault at 0 ip 0000558b5103db47 sp 00007fff40f32300 error 6 in 13[558b51039000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 4[386566]: segfault at 8 ip 00007f3da69fe875 sp 00007fffbb5130b0 error 4 in ld-2.28.so[7f3da69dd000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 11[387687]: segfault at 8 ip 00007f94fc19a875 sp 00007ffced5f1b70 error 4 in ld-2.28.so[7f94fc179000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff94224c645000/0x95447d02dd19a0ad lrc: 3/0,0 mode: CR/CR res: [0x20000040a:0xdc2:0x0].0x0 bits 0xa/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dd19a09f expref: 466 pid: 17604 timeout: 1958 lvb_type: 0 LustreError: 7886:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000eb0a41c6 ns: mdt-lustre-MDT0000_UUID lock: ffff9422a8df9e00/0x95447d02dd19a4ab lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 22 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x95447d02dd19a496 
expref: 168 pid: 7886 timeout: 0 lvb_type: 0 LustreError: 7886:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 5 previous similar messages LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 394792:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -5 LustreError: 394792:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 20 previous similar messages LustreError: 394534:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x20000040a:0xdc2:0x0] error -108. Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection restored to (at 0@lo) 14[408209]: segfault at 8 ip 00007f1ea7c07875 sp 00007ffec632d140 error 4 in ld-2.28.so[7f1ea7be6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[410350]: segfault at 8 ip 00007f578fa44875 sp 00007ffd11716890 error 4 in ld-2.28.so[7f578fa23000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[425920]: segfault at 8 ip 00007fb0b8b51875 sp 00007ffecf8152c0 error 4 in ld-2.28.so[7fb0b8b30000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[436501]: segfault at 0 ip 0000557356a2fb47 sp 
00007ffd5d4f5230 error 6 in 4 (deleted)[557356a2b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: lustre-OST0002-osc-ffff942235ef2000: disconnect after 20s idle LustreError: 5711:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff94228024ae00/0x95447d02dd3bfd84 lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x24f9:0x0].0x0 bits 0x1b/0x0 rrc: 5 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x95447d02dd3bfcb2 expref: 583 pid: 75696 timeout: 2185 lvb_type: 0 LustreError: 5706:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754394952 with bad export cookie 10755859261320440705 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation mds_getxattr to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 4 previous similar messages LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 445076:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff9421cc4f0000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 445076:0:(mdc_request.c:1477:mdc_read_page()) Skipped 3 previous similar messages LustreError: 444525:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 445164:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9421cc4f0000: namespace resource [0x200000007:0x1:0x0].0x0 (ffff9421d63f3700) refcount nonzero (4) after lock cleanup; forcing cleanup. 
LustreError: 444525:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 108 previous similar messages LustreError: 445164:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 3 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9421cc4f0000: Connection restored to (at 0@lo) 3[445923]: segfault at 0 ip 0000557e18873b47 sp 00007fffeddfb6b0 error 6 in 3 (deleted)[557e1886f000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2[461236]: segfault at 0 ip 000055fde97f9b47 sp 00007fff8eeab910 error 6 in 2[55fde97f5000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 19[462739]: segfault at 55ec626e6000 ip 000055ec626e6000 sp 00007fffd7c5b140 error 14 in 19[55ec628e6000+1000] Code: Unable to access opcode bytes at RIP 0x55ec626e5fd6. 
7[466537]: segfault at 8 ip 00007f72cb343875 sp 00007ffc3c747370 error 4 in ld-2.28.so[7f72cb322000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[478720]: segfault at 8 ip 00007fcdf41dc875 sp 00007ffd04f68d60 error 4 in ld-2.28.so[7fcdf41bb000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 9[483739]: segfault at 8 ip 00007feffd320875 sp 00007fff91615f50 error 4 in ld-2.28.so[7feffd2ff000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 traps: 0[484608] trap invalid opcode ip:55f4bc04b70c sp:7ffcf0ef1388 error:0 in 0[55f4bc045000+7000] 4[489108]: segfault at 0 ip 0000560447022b47 sp 00007fff0e8bbec0 error 6 in 4[56044701e000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 11[492501]: segfault at 8 ip 00007fc5b163b875 sp 00007ffe43e8d7a0 error 4 in ld-2.28.so[7fc5b161a000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: operation mds_reint to node 0@lo failed: rc = -107 LustreError: 5708:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754395169 with bad export cookie 10755859261322697001 LustreError: lustre-MDT0000-mdc-ffff9421cc4f0000: This client was evicted by lustre-MDT0000; in progress operations 
using this service will fail. LustreError: Skipped 1 previous similar message LustreError: 495627:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 495881:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9421cc4f0000: inode [0x20000040c:0x11f4:0x0] mdc close failed: rc = -108 LustreError: 495881:0:(file.c:248:ll_close_inode_openhandle()) Skipped 34 previous similar messages Lustre: 8456:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8456:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 357 previous similar messages Lustre: 8456:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8456:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 357 previous similar messages Lustre: 8456:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8456:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 357 previous similar messages Lustre: 8456:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8456:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 357 previous similar messages Lustre: 8456:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8456:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 357 previous similar messages Lustre: 8456:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8456:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 357 previous similar messages 2[496831]: segfault at 0 ip 0000559fa5227b47 sp 00007fffb6f98da0 error 6 in 2[559fa5223000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 1[503300]: segfault at 0 ip 00005611c6757b47 sp 00007ffc423b3a70 error 6 in 1[5611c6753000+7000] Code: 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 2[505215]: segfault at 0 ip 0000562de2460b47 sp 00007ffde2eb8b30 error 6 in 18[562de245c000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 traps: 10[511792] trap invalid opcode ip:55afa6241c1b sp:7ffc77ad4030 error:0 in 10[55afa623f000+7000] | Link to test |
racer test 1: racer on clients: centos-100.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 12 PID: 231511 Comm: ll_sa_230291 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffa33ec5f13e90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008020001d RDX: 000000008020001e RSI: ffff9604dd238448 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9605467f6c00 R11: 0000000000007c78 R12: ffff9604dd238400 R13: ffff9605467f6cb8 R14: ffff9604dd2380c8 R15: ffff9604dd238448 FS: 0000000000000000(0000) GS:ffff960672500000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000011f6ab000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? update_load_avg+0x9f/0xa40 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) pcspkr virtio_balloon i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 10390:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 510 < left 1072, rollback = 2 Lustre: 10390:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/2, destroy: 1/4/0 Lustre: 10390:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 13/1072/0 Lustre: 10390:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 1/3/0 Lustre: 10390:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 14/287/4, delete: 2/5/0 Lustre: 10390:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 7/7/0, ref_del: 2/2/0 Lustre: 10859:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 506 < left 699, rollback = 2 Lustre: 10859:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 23 previous similar messages Lustre: 10859:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/8, destroy: 0/0/0 Lustre: 10859:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 10859:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 9/699/0 Lustre: 10859:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 10859:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 10859:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 
10859:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 10/203/2, delete: 0/0/0 Lustre: 10859:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 10859:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 4/4/0, ref_del: 0/0/0 Lustre: 10859:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 23 previous similar messages Lustre: 11486:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0001: opcode 2: before 500 < left 699, rollback = 2 Lustre: 11486:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 7 previous similar messages Lustre: 11486:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/12, destroy: 0/0/0 Lustre: 11486:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 11486:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 9/699/0 Lustre: 11486:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 11486:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 11486:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 11486:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 11/239/4, delete: 0/0/0 Lustre: 11486:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 11486:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 5/5/0, ref_del: 0/0/0 Lustre: 11486:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 10838:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 9867:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff960508814980 x1839216679845248/t4294968021(0) o101->226463e2-4e0d-4ba7-a525-f6be5096dd9a@0@lo:45/0 lens 376/864 e 0 to 0 dl 1754013780 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 7536:0:(osd_internal.h:1334:osd_trans_exec_op()) 
lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 7536:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 3 previous similar messages 14[13741]: segfault at 8 ip 00007f0e65cb2875 sp 00007fff6096f420 error 4 in ld-2.28.so[7f0e65c91000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 12549:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 502 < left 877, rollback = 2 Lustre: 12549:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 93 previous similar messages Lustre: 12549:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/11, destroy: 0/0/0 Lustre: 12549:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 12549:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 11/877/0 Lustre: 12549:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 12549:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 12549:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 12549:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 10/203/3, delete: 0/0/0 Lustre: 12549:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 12549:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 4/4/0, ref_del: 0/0/0 Lustre: 12549:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 99 previous similar messages Lustre: 11350:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0001: [0x280000403:0x1:0x0]/14 is open, migrate only dentry Lustre: 6753:0:(out_handler.c:879:out_tx_end()) lustre-MDT0002-osd: error during execution of #0 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:562: rc = -2 LustreError: 
11350:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x280000403:0x1:0x0]/14 failed: rc = -2 Lustre: 12854:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x26:0x0]/16 is open, migrate only dentry LustreError: 13724:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1:0x0]/7 failed: rc = -71 LustreError: 13724:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 1 previous similar message Lustre: 7537:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 522, rollback = 7 Lustre: 7537:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6747:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9604ca259180 x1839216682281472/t4294969806(0) o101->226463e2-4e0d-4ba7-a525-f6be5096dd9a@0@lo:49/0 lens 376/816 e 0 to 0 dl 1754013784 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 LustreError: 6760:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x200000404:0x77:0x0]/20 failed: rc = -2 LustreError: 6760:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 1 previous similar message LustreError: 14234:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0002: '17' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 17' to finish migration: rc = -1 Lustre: 13963:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/13 is open, migrate only dentry LustreError: 7622:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0001: [0x240000404:0x4a:0x0] migrate mdt count mismatch 2 != 1 12[15567]: segfault at 8 ip 00007f1c2041a875 sp 00007fff462cacd0 error 4 in ld-2.28.so[7f1c203f9000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[16938]: segfault at 8 ip 00007f3472830875 sp 00007ffc69435010 error 4 in 
ld-2.28.so[7f347280f000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 14398:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 509 < left 1233, rollback = 2 Lustre: 14398:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 105 previous similar messages Lustre: 14398:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/4, destroy: 0/0/0 Lustre: 14398:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 14398:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 15/1233/0 Lustre: 14398:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 14398:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 14398:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 14398:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 11/239/3, delete: 0/0/0 Lustre: 14398:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 14398:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 5/5/0, ref_del: 0/0/0 Lustre: 14398:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 109 previous similar messages Lustre: 13332:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/11 is open, migrate only dentry Lustre: 13332:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 1 previous similar message LustreError: 6761:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000404:0xe8:0x0]/6 failed: rc = -2 LustreError: 6761:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 2 previous similar messages Lustre: 14480:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 
14480:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 17021:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0001: [0x240000404:0x40:0x0]/13 is open, migrate only dentry Lustre: 17021:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 4 previous similar messages 12[18613]: segfault at 8 ip 00007fcd6a0f1875 sp 00007fffd6f043c0 error 4 in ld-2.28.so[7fcd6a0d0000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 14398:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 508 < left 966, rollback = 2 Lustre: 14398:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 167 previous similar messages Lustre: 14398:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/6, destroy: 0/0/0 Lustre: 14398:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 169 previous similar messages Lustre: 14398:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 12/966/0 Lustre: 14398:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 169 previous similar messages Lustre: 14398:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 14398:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 169 previous similar messages Lustre: 14398:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 11/239/2, delete: 0/0/0 Lustre: 14398:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 169 previous similar messages Lustre: 14398:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 5/5/0, ref_del: 0/0/0 Lustre: 14398:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 169 previous similar messages LustreError: 13533:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x280000404:0x57:0x0]/6 failed: rc = -2 2[20521]: segfault at 8 ip 00007fe9a8828875 sp 00007ffe18137500 error 4 in 
ld-2.28.so[7fe9a8807000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 10390:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=3 index=2 hash=crush:0x82000003 version=1 migrate_offset=2 migrate_hash=fnv_1a_64:2 pool= LustreError: 10656:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000404:0x1:0x0]: rc = -2 LustreError: 16038:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x240000404:0x1:0x0] mdc close failed: rc = -2 Lustre: 21063:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 588, rollback = 7 Lustre: 21063:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 19273:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff960499863480 x1839216688363264/t4294972165(0) o101->226463e2-4e0d-4ba7-a525-f6be5096dd9a@0@lo:66/0 lens 376/840 e 0 to 0 dl 1754013801 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 LustreError: 10390:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=4 index=3 hash=crush:0x82000003 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool= Lustre: 11350:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0002: [0x280000401:0x2:0x0]/18 is open, migrate only dentry LustreError: 19277:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000403:0x191:0x0] migrate mdt count mismatch 3 != 1 LustreError: 13533:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0002: '5' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 5' to finish migration: rc = -1 LustreError: 13533:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x2:0x0]/5 failed: rc = -1 LustreError: 13533:0:(mdt_reint.c:2564:mdt_reint_migrate()) 
Skipped 5 previous similar messages LustreError: 19439:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000403:0x16a:0x0]: rc = -5 LustreError: 19439:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 13839:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0001: '6' migration was interrupted, run 'lfs migrate -m 2 -c 2 -H crush 6' to finish migration: rc = -1 LustreError: 10656:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000403:0x23:0x0]: rc = -2 LustreError: 22978:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x240000403:0x23:0x0] mdc close failed: rc = -2 LustreError: 9385:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x1ca:0x0]: rc = -5 LustreError: 9385:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 Lustre: 13533:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0001: opcode 2: before 505 < left 877, rollback = 2 Lustre: 13533:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 364 previous similar messages Lustre: 13533:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/7, destroy: 0/0/0 Lustre: 13533:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 366 previous similar messages Lustre: 13533:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 11/877/0 Lustre: 13533:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 366 previous similar messages Lustre: 13533:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 13533:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 366 previous similar messages Lustre: 13533:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 10/203/4, delete: 0/0/0 Lustre: 13533:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 366 previous similar messages Lustre: 13533:0:(osd_handler.c:1997:osd_trans_dump_creds()) 
ref_add: 4/4/0, ref_del: 0/0/0 Lustre: 13533:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 366 previous similar messages LustreError: 6761:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0002: '15' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 15' to finish migration: rc = -1 LustreError: 25173:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x1df:0x0]: rc = -5 LustreError: 25173:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 6750:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0002: failed to get lu_attr of [0x280000403:0x1b:0x0]: rc = -2 LustreError: 26034:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9604795be800: inode [0x280000403:0x1b:0x0] mdc close failed: rc = -2 LustreError: 12217:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000404:0x31f:0x0] migrate mdt count mismatch 2 != 3 Lustre: 12854:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/17 is open, migrate only dentry Lustre: 12854:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 5 previous similar messages traps: 2[27226] trap invalid opcode ip:563db22c6a8f sp:7f10c7575778 error:0 in 2[563db22c0000+7000] LustreError: 6761:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x2aa:0x0]/6 failed: rc = -116 LustreError: 6761:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 7 previous similar messages LustreError: 9344:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x1ca:0x0]: rc = -5 LustreError: 9344:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 23498:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff9604795be800: cannot apply new layout on [0x200000403:0x28b:0x0] : rc = -5 LustreError: 23498:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000403:0x28b:0x0] error -5. 
LustreError: 9819:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x17a:0x0] migrate mdt count mismatch 2 != 1 LustreError: 29790:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x388:0x0]: rc = -5 LustreError: 29790:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 27782:0:(llite_lib.c:1889:ll_update_lsm_md()) lustre: [0x280000404:0x27c:0x0] dir layout mismatch: LustreError: 27782:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=4 count=3 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 27782:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x280000400:0x11:0x0] LustreError: 27782:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool= Lustre: 11296:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 11296:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1 previous similar message LustreError: 11486:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0002: '10' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 10' to finish migration: rc = -1 LustreError: 32690:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff9604795be800: dir page locate: [0x280000404:0x15:0x0] at 0: rc -5 Lustre: dir [0x200000403:0x191:0x0] stripe 3 readdir failed: -2, directory is partially accessed! 
LustreError: 32690:0:(mdc_request.c:1492:mdc_read_page()) Skipped 5 previous similar messages Lustre: Skipped 9 previous similar messages LustreError: 6750:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000403:0xf1:0x0]: rc = -2 LustreError: 6750:0:(mdd_object.c:3901:mdd_close()) Skipped 1 previous similar message LustreError: 28853:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x240000403:0xf1:0x0] mdc close failed: rc = -2 LustreError: 28853:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 11486:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=4 index=3 hash=crush:0x82000003 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool= LustreError: 33293:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x1bb:0x0]: rc = -5 LustreError: 33293:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 3 previous similar messages LustreError: 33293:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 33293:0:(llite_lib.c:3787:ll_prep_inode()) Skipped 3 previous similar messages LustreError: 19269:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000404:0x422:0x0] migrate mdt count mismatch 3 != 2 Lustre: 17899:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 511 < left 1233, rollback = 2 Lustre: 17899:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 645 previous similar messages Lustre: 17899:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/2, destroy: 0/0/0 Lustre: 17899:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 653 previous similar messages Lustre: 17899:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 15/1233/0 Lustre: 17899:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 653 previous similar messages Lustre: 17899:0:(osd_handler.c:1983:osd_trans_dump_creds()) 
write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 17899:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 653 previous similar messages Lustre: 17899:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 10/203/3, delete: 0/0/0 Lustre: 17899:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 653 previous similar messages Lustre: 17899:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 4/4/0, ref_del: 0/0/0 Lustre: 17899:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 653 previous similar messages Lustre: 14480:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 600, rollback = 7 Lustre: 14480:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 7 previous similar messages Lustre: dir [0x280000404:0x335:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: 11486:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/4 is open, migrate only dentry Lustre: 11486:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 20 previous similar messages LustreError: 37795:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff9604795be800: cannot apply new layout on [0x200000403:0x53b:0x0] : rc = -5 LustreError: 37795:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000403:0x53b:0x0] error -5. LustreError: 6748:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000403:0x48f:0x0] migrate mdt count mismatch 1 != 3 LustreError: 36430:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff9604795be800: dir page locate: [0x200000404:0x464:0x0] at 0: rc -5 Lustre: dir [0x200000404:0x5d0:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
LustreError: 36430:0:(mdc_request.c:1492:mdc_read_page()) Skipped 3 previous similar messages Lustre: Skipped 4 previous similar messages LustreError: 13839:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x510:0x0]/19 failed: rc = -2 LustreError: 40791:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000404:0x4e5:0x0]: rc = -5 LustreError: 13839:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 15 previous similar messages LustreError: 40791:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 9 previous similar messages LustreError: 40791:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 40791:0:(llite_lib.c:3787:ll_prep_inode()) Skipped 9 previous similar messages Lustre: dir [0x240000404:0x535:0x0] stripe 3 readdir failed: -2, directory is partially accessed! LustreError: 38404:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff9604795be800: dir page locate: [0x200000404:0x56b:0x0] at 0: rc -5 Lustre: Skipped 1 previous similar message LustreError: 38404:0:(mdc_request.c:1492:mdc_read_page()) Skipped 2 previous similar messages LustreError: 11545:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000404:0x5d0:0x0]: rc = -2 LustreError: 39453:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x200000404:0x5d0:0x0] mdc close failed: rc = -2 LustreError: 16998:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0001: '5' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 5' to finish migration: rc = -1 LustreError: 16998:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) Skipped 2 previous similar messages Lustre: dir [0x200000403:0x567:0x0] stripe 3 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 1 previous similar message 9[46795]: segfault at 0 ip 0000563a62e84b47 sp 00007ffe8cb3ebf0 error 6 in 9[563a62e80000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 hrtimer: interrupt took 4732229 ns Lustre: 10455:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x8ca:0x0] with magic=0xbd60bd0 LustreError: 49285:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff9604795be800: cannot apply new layout on [0x200000403:0x53b:0x0] : rc = -5 LustreError: 9549:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x5d4:0x0]: rc = -5 LustreError: 9549:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 18 previous similar messages LustreError: 9549:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 9549:0:(llite_lib.c:3787:ll_prep_inode()) Skipped 18 previous similar messages Lustre: 10838:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 582, rollback = 7 Lustre: 10838:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 15 previous similar messages Lustre: 31097:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x61d:0x0] with magic=0xbd60bd0 Lustre: 31097:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000403:0x75a:0x0]: rc = -2 LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) Skipped 2 previous similar messages LustreError: 48353:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9604795be800: inode [0x240000403:0x75a:0x0] mdc close failed: rc = -2 LustreError: 
48353:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages Lustre: 51402:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x824:0x0] with magic=0xbd60bd0 Lustre: 51402:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 8[53009]: segfault at 0 ip 000055deda9e2b47 sp 00007fff144a1f60 error 6 in 8[55deda9de000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 53428:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff9604795be800: cannot apply new layout on [0x200000403:0x53b:0x0] : rc = -5 LustreError: 42938:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x4e5:0x0] error -5. 15[55040]: segfault at 8 ip 00007feffbf47875 sp 00007ffe9d7f5760 error 4 in ld-2.28.so[7feffbf26000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 19[55008]: segfault at 0 ip 00005577dc599b47 sp 00007fff696b6490 error 6 in 19[5577dc595000+7000] Lustre: mdt00_018: service thread pid 10501 was inactive for 40.074 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 task:mdt00_029 state:I stack:0 pid:12256 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? 
woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2249/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] Lustre: Skipped 1 previous similar message task:mdt00_018 state:I stack:0 pid:10501 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] task:mdt00_028 state:I stack:0 pid:12230 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] ? mdt_object_find+0x106/0x480 [mdt] ? lustre_msg_add_version+0x29/0xd0 [ptlrpc] mdt_object_find_lock+0x72/0x1c0 [mdt] mdt_reint_setxattr+0x1ba/0x1830 [mdt] ? 
lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? osd_xattr_get+0x2dd/0x940 [osd_ldiskfs] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Lustre: mdt00_030: service thread pid 16444 was inactive for 40.991 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. LustreError: 54069:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff9604795be800: cannot apply new layout on [0x200000403:0x578:0x0] : rc = -5 mdt_object_lock_internal+0x20b/0x5a0 [mdt] LustreError: 54069:0:(lov_object.c:1358:lov_layout_change()) Skipped 2 previous similar messages LustreError: 54069:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000403:0x578:0x0] error -5. ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] ? __might_sleep+0x59/0xc0 mdt_object_open_lock+0x344/0x1100 [mdt] mdt_open_by_fid_lock+0xcad/0x1170 [mdt] mdt_reint_open+0x943/0x3c10 [mdt] ? sptlrpc_svc_alloc_rs+0x70/0x460 [ptlrpc] ? lustre_msg_add_version+0x29/0xd0 [ptlrpc] ? lustre_pack_reply_v2+0x282/0x380 [ptlrpc] ? ucred_set_audit_enabled.isra.12+0x28/0xa0 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? 
lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 11240:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x496:0x0] migrate mdt count mismatch 2 != 1 LustreError: 12854:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0001: '6' migration was interrupted, run 'lfs migrate -m 0 -c 3 -H crush 6' to finish migration: rc = -1 LustreError: 12854:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) Skipped 2 previous similar messages Lustre: mdt00_013: service thread pid 10444 was inactive for 43.848 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: 11240:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x7f6:0x0] with magic=0xbd60bd0 Lustre: 11240:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages Lustre: dir [0x280000404:0x6c2:0x0] stripe 1 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 1 previous similar message LustreError: 56685:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff9604795be800: dir page locate: [0x240000403:0x9ab:0x0] at 0: rc -5 Lustre: 19270:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x9aa:0x0] with magic=0xbd60bd0 Lustre: 19270:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 9 previous similar messages LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0002: failed to get lu_attr of [0x280000404:0x120:0x0]: rc = -2 LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) Skipped 1 previous similar message LustreError: 56025:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x280000404:0x120:0x0] mdc close failed: rc = -2 LustreError: 56025:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 57099:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff960478ebb800: cannot apply new layout on [0x200000404:0x4e5:0x0] : rc = -5 LustreError: 57099:0:(lov_object.c:1358:lov_layout_change()) Skipped 1 previous similar message Lustre: 7536:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 7536:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1670 previous similar messages Lustre: 7536:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 7536:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1670 previous similar messages Lustre: 7536:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 7536:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1670 previous similar messages Lustre: 7536:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 7536:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1670 previous similar messages Lustre: 7536:0:(osd_handler.c:1997:osd_trans_dump_creds()) 
ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 7536:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1670 previous similar messages LustreError: 11350:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000403:0xaf6:0x0]/10 failed: rc = -2 LustreError: 11350:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 23 previous similar messages 2[58955]: segfault at 8 ip 00007fc82ee22875 sp 00007ffc9b333c80 error 4 in ld-2.28.so[7fc82ee01000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 13332:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 501 < left 1500, rollback = 2 Lustre: 13332:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1644 previous similar messages Lustre: 13724:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/19 is open, migrate only dentry Lustre: 13724:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 21 previous similar messages LustreError: 156:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff9604795be800: cannot apply new layout on [0x200000403:0x578:0x0] : rc = -5 LustreError: 156:0:(lov_object.c:1358:lov_layout_change()) Skipped 2 previous similar messages LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 17 [0x200000403:0x578:0x0] inode@0000000000000000: rc = -5 Lustre: 10283:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x766:0x0] with magic=0xbd60bd0 Lustre: 10283:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 7 previous similar messages LustreError: 16998:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0001: '13' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 13' to finish migration: rc = -1 LustreError: 16998:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) 
Skipped 2 previous similar messages Lustre: 16998:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x7d9:0x0] with magic=0xbd60bd0 Lustre: 16998:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 19 previous similar messages LustreError: 853:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000403:0x53b:0x0]: rc = -5 LustreError: 853:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 69 previous similar messages LustreError: 853:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 853:0:(llite_lib.c:3787:ll_prep_inode()) Skipped 69 previous similar messages LustreError: 853:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x200000403:0x53b:0x0] inode@0000000000000000: rc = -5 LustreError: 853:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 3 previous similar messages Lustre: 40956:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 40956:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 25 previous similar messages LustreError: 6737:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0002_UUID lock: ffff9604d76ef200/0x8c57fda7fcc83b63 lrc: 3/0,0 mode: PR/PR res: [0x280000403:0x59f:0x0].0x0 bits 0x13/0x0 rrc: 8 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x8c57fda7fcc83aec expref: 425 pid: 9859 timeout: 273 lvb_type: 0 LustreError: 9867:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff960508952300 x1839216743880960/t0(0) o104->lustre-MDT0002@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: lustre-MDT0002-mdc-ffff960478ebb800: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: 
lustre-MDT0002-mdc-ffff960478ebb800: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: mdt00_030: service thread pid 16444 completed after 102.460s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_028: service thread pid 12230 completed after 101.604s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_029: service thread pid 12256 completed after 101.604s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 54834:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000003299d54e ns: mdt-lustre-MDT0002_UUID lock: ffff9605196b5a00/0x8c57fda7fce0cb15 lrc: 3/0,0 mode: PR/PR res: [0x280000404:0x79d:0x0].0x0 bits 0x12/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x8c57fda7fce0cb00 expref: 138 pid: 54834 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0002-mdc-ffff960478ebb800: This client was evicted by lustre-MDT0002; in progress operations using this service will fail. Lustre: mdt00_018: service thread pid 10501 completed after 101.625s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_013: service thread pid 10444 completed after 101.328s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 70182:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000403:0x789:0x0] error: rc = -5 LustreError: 70182:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 1 previous similar message LustreError: 68616:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x280000403:0x85c:0x0] mdc close failed: rc = -108 LustreError: 68616:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 66570:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0002-mdc-ffff960478ebb800: [0x280000401:0x20:0x0] lock enqueue fails: rc = -5 Lustre: dir [0x280000404:0x77d:0x0] stripe 4 readdir failed: -5, directory is partially accessed! LustreError: 41985:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 Lustre: Skipped 8 previous similar messages Lustre: lustre-MDT0002-mdc-ffff960478ebb800: Connection restored to (at 0@lo) LustreError: 65724:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0002: [0x280000403:0x789:0x0] migrate mdt count mismatch 1 != 2 LustreError: 71968:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff9604795be800: dir page locate: [0x240000400:0xc:0x0] at 0: rc -5 LustreError: 71968:0:(mdc_request.c:1492:mdc_read_page()) Skipped 4 previous similar messages LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000404:0xf54:0x0]: rc = -2 LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) Skipped 1 previous similar message Lustre: 19273:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0xfe7:0x0] with magic=0xbd60bd0 Lustre: 19273:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages 3[78676]: segfault at 8 ip 00007f24820ef875 sp 00007ffcc8d2b490 error 4 in ld-2.28.so[7f24820ce000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 
49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 3[80152]: segfault at 8 ip 00007f49d0a9a875 sp 00007ffea5fe3650 error 4 in ld-2.28.so[7f49d0a79000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 8[84097]: segfault at 8 ip 00007f5275748875 sp 00007ffff713b520 error 4 in ld-2.28.so[7f5275727000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 14398:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0001: '15' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 15' to finish migration: rc = -1 LustreError: 14398:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) Skipped 8 previous similar messages Lustre: dir [0x280000403:0x789:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 16 previous similar messages LustreError: 85944:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff960478ebb800: dir page locate: [0x240000403:0x133e:0x0] at 0: rc -5 LustreError: 85944:0:(mdc_request.c:1492:mdc_read_page()) Skipped 9 previous similar messages LustreError: 88967:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff960478ebb800: cannot apply new layout on [0x280000404:0x99d:0x0] : rc = -5 LustreError: 88967:0:(lov_object.c:1358:lov_layout_change()) Skipped 5 previous similar messages LustreError: 88967:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x280000404:0x99d:0x0] error -5. 
LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 1 [0x280000404:0x99d:0x0] inode@0000000000000000: rc = -5 LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message traps: 14[99339] trap invalid opcode ip:55b49cefce44 sp:7ffe15ecb210 error:0 in 14[55b49cef8000+7000] Lustre: 6761:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/4, destroy: 1/4/0 Lustre: 6761:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1760 previous similar messages Lustre: 6761:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 4/4/0, xattr_set: 29/2346/0 Lustre: 6761:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1760 previous similar messages Lustre: 6761:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 7/129/0 Lustre: 6761:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1760 previous similar messages Lustre: 6761:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 15/263/3, delete: 3/6/0 Lustre: 6761:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1760 previous similar messages Lustre: 6761:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 8/8/0, ref_del: 3/3/0 Lustre: 6761:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1760 previous similar messages LustreError: 6761:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/14 failed: rc = -1 LustreError: 6761:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 42 previous similar messages Lustre: 17021:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 505 < left 816, rollback = 2 Lustre: 17021:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 1773 previous similar messages Lustre: 6761:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/2 is open, migrate only dentry Lustre: 6761:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 66 previous similar messages LustreError: 
10656:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000403:0x1b05:0x0]: rc = -2 LustreError: 103699:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x240000403:0x1b05:0x0] mdc close failed: rc = -2 LustreError: 103699:0:(file.c:248:ll_close_inode_openhandle()) Skipped 21 previous similar messages 7[104746]: segfault at 55dce60d5000 ip 000055dce60d5000 sp 00007ffdc82eafb8 error 14 in 7[55dce62d5000+1000] Code: Unable to access opcode bytes at RIP 0x55dce60d4fd6. Lustre: 7622:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000403:0x1a04:0x0] with magic=0xbd60bd0 Lustre: 7622:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 43 previous similar messages LustreError: 106936:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x280000405:0x646:0x0] error -5. LustreError: 106936:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message LustreError: 113052:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000404:0xbae:0x0]: rc = -5 LustreError: 113052:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 139 previous similar messages LustreError: 113052:0:(llite_lib.c:3787:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 113052:0:(llite_lib.c:3787:ll_prep_inode()) Skipped 139 previous similar messages LustreError: 113516:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff960478ebb800: dir page locate: [0x240000403:0x1fcd:0x0] at 0: rc -5 Lustre: dir [0x240000403:0x1ff6:0x0] stripe 2 readdir failed: -5, directory is partially accessed! 
LustreError: 113516:0:(mdc_request.c:1492:mdc_read_page()) Skipped 5 previous similar messages Lustre: Skipped 15 previous similar messages Lustre: 21063:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7 Lustre: 21063:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 63 previous similar messages LustreError: 116215:0:(llite_lib.c:1889:ll_update_lsm_md()) lustre: [0x280000405:0x879:0x0] dir layout mismatch: LustreError: 116215:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=1 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 116215:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x280000400:0x3e:0x0] LustreError: 116215:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 6 previous similar messages LustreError: 116215:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=2 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool= LustreError: 116133:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=1 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 116133:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=2 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool= LustreError: 116311:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=1 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 116311:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=2 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=1 
migrate_hash=fnv_1a_64:2 pool= LustreError: 116209:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=3 count=1 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 116209:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=2 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool= LustreError: 116308:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=1 index=2 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 116308:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=2 index=2 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool= 3[116957]: segfault at 8 ip 00007f168834e875 sp 00007ffcd89a3d50 error 4 in ld-2.28.so[7f168832d000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 10[117750]: segfault at 0 ip 000055d775c9d200 sp 00007ffeaa99a2e8 error 6 in 10[55d775c9b000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 LustreError: 117626:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff960478ebb800: cannot apply new layout on [0x280000404:0x6b5:0x0] : rc = -5 LustreError: 117626:0:(lov_object.c:1358:lov_layout_change()) Skipped 15 previous similar messages LustreError: 6737:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0002_UUID lock: ffff9604d8f30200/0x8c57fda7fcec0694 lrc: 3/0,0 
mode: PR/PR res: [0x280000404:0xa2e:0x0].0x0 bits 0x1b/0x0 rrc: 17 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x8c57fda7fcec0624 expref: 482 pid: 31309 timeout: 408 lvb_type: 0 LustreError: 20874:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 000000004f7ab716 ns: mdt-lustre-MDT0002_UUID lock: ffff9604d76eca00/0x8c57fda7fcef2122 lrc: 3/0,0 mode: PR/PR res: [0x280000404:0xa2e:0x0].0x0 bits 0x1b/0x0 rrc: 13 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x8c57fda7fcef2114 expref: 159 pid: 20874 timeout: 0 lvb_type: 0 LustreError: 20874:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 1 previous similar message LustreError: lustre-MDT0002-mdc-ffff9604795be800: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: Skipped 3 previous similar messages Lustre: lustre-MDT0002-mdc-ffff9604795be800: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0002-mdc-ffff9604795be800: This client was evicted by lustre-MDT0002; in progress operations using this service will fail. LustreError: 85675:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000404:0xa2e:0x0] error: rc = -5 LustreError: 86654:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 85675:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 12 previous similar messages LustreError: 86654:0:(llite_lib.c:2040:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 9358:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x280000405:0x2df:0x0] error -108. 
LustreError: 87207:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0002-mdc-ffff9604795be800: [0x280000402:0x16:0x0] lock enqueue fails: rc = -108 LustreError: 87207:0:(mdc_request.c:1477:mdc_read_page()) Skipped 7 previous similar messages Lustre: lustre-MDT0002-mdc-ffff9604795be800: Connection restored to (at 0@lo) 4[87215]: segfault at 8 ip 00007fe89fd81875 sp 00007ffcf963bc20 error 4 in ld-2.28.so[7fe89fd60000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 11184:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000404:0x8ed:0x0] migrate mdt count mismatch 3 != 2 LustreError: 11184:0:(mdt_xattr.c:406:mdt_dir_layout_update()) Skipped 3 previous similar messages 2[123783]: segfault at 7ffe49062950 ip 00007ffe49062950 sp 00007ffe490627c8 error 15 Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 5e 6b d0 d0 ac 55 00 00 48 29 06 49 fe 7f 00 00 30 11 be ee f9 7f 00 00 <02> 00 00 00 00 00 00 00 33 3d 06 49 fe 7f 00 00 48 3d 06 49 fe 7f Lustre: mdt_io00_008: service thread pid 12549 was inactive for 40.828 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: mdt_io00_018: service thread pid 14398 was inactive for 42.522 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 14 previous similar messages Lustre: mdt_io00_014: service thread pid 13839 was inactive for 42.966 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 2 previous similar messages INFO: task mrename:87096 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:87096 ppid:9517 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 ? 
filename_parentat.isra.44+0x153/0x220 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 __mutex_lock.isra.10+0x93e/0xec0 __mutex_lock_slowpath+0x1f/0x30 mutex_lock+0x5b/0x70 lock_rename+0x33/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f03eb2976cb Code: Unable to access opcode bytes at RIP 0x7f03eb2976a1. RSP: 002b:00007fff0745bdd8 EFLAGS: 00000206 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007fff0745bec8 RCX: 00007f03eb2976cb RDX: 00007fff0745bee8 RSI: 00007fff0745ccf4 RDI: 00007fff0745ccdb RBP: 0000000000400800 R08: 00007f03eb5e6d20 R09: 00007f03eb5e6d20 R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000400710 R13: 00007fff0745bec0 R14: 0000000000000000 R15: 0000000000000000 INFO: task mrename:87642 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:87642 ppid:9540 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 ? filename_parentat.isra.44+0x153/0x220 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 __mutex_lock.isra.10+0x93e/0xec0 __mutex_lock_slowpath+0x1f/0x30 mutex_lock+0x5b/0x70 lock_rename+0x33/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7fc79adc66cb Code: Unable to access opcode bytes at RIP 0x7fc79adc66a1. RSP: 002b:00007ffc3ad677c8 EFLAGS: 00000202 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007ffc3ad678b8 RCX: 00007fc79adc66cb RDX: 00007ffc3ad678d8 RSI: 00007ffc3ad68cf4 RDI: 00007ffc3ad68cdd RBP: 0000000000400800 R08: 00007fc79b115d20 R09: 00007fc79b115d20 R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000400710 R13: 00007ffc3ad678b0 R14: 0000000000000000 R15: 0000000000000000 INFO: task mrename:89090 blocked for more than 120 seconds. 
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:89090 ppid:9638 flags:0x80000080 Call Trace: __schedule+0x351/0xcb0 ? filename_parentat.isra.44+0x153/0x220 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 __mutex_lock.isra.10+0x93e/0xec0 __mutex_lock_slowpath+0x1f/0x30 mutex_lock+0x5b/0x70 lock_rename+0x33/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7fe3916346cb Code: Unable to access opcode bytes at RIP 0x7fe3916346a1. RSP: 002b:00007ffe92f14618 EFLAGS: 00000206 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007ffe92f14708 RCX: 00007fe3916346cb RDX: 00007ffe92f14728 RSI: 00007ffe92f15cf3 RDI: 00007ffe92f15cd9 RBP: 0000000000400800 R08: 00007fe391983d20 R09: 00007fe391983d20 R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000400710 R13: 00007ffe92f14700 R14: 0000000000000000 R15: 0000000000000000 INFO: task mrename:92418 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:92418 ppid:9346 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 ? filename_parentat.isra.44+0x153/0x220 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 __mutex_lock.isra.10+0x93e/0xec0 __mutex_lock_slowpath+0x1f/0x30 mutex_lock+0x5b/0x70 lock_rename+0x33/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7f2a44dd26cb Code: Unable to access opcode bytes at RIP 0x7f2a44dd26a1. 
RSP: 002b:00007ffd8d0d74b8 EFLAGS: 00000206 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007ffd8d0d75a8 RCX: 00007f2a44dd26cb RDX: 00007ffd8d0d75c8 RSI: 00007ffd8d0d9cf2 RDI: 00007ffd8d0d9cdd RBP: 0000000000400800 R08: 00007f2a45121d20 R09: 00007f2a45121d20 R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000400710 R13: 00007ffd8d0d75a0 R14: 0000000000000000 R15: 0000000000000000 INFO: task mrename:93614 blocked for more than 120 seconds. Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. task:mrename state:D stack:0 pid:93614 ppid:9498 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 ? filename_parentat.isra.44+0x153/0x220 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 __mutex_lock.isra.10+0x93e/0xec0 __mutex_lock_slowpath+0x1f/0x30 mutex_lock+0x5b/0x70 lock_rename+0x33/0x160 do_renameat2+0x313/0x730 __x64_sys_rename+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae RIP: 0033:0x7fb4a78696cb Code: Unable to access opcode bytes at RIP 0x7fb4a78696a1. 
RSP: 002b:00007fff0398c828 EFLAGS: 00000206 ORIG_RAX: 0000000000000052 RAX: ffffffffffffffda RBX: 00007fff0398c918 RCX: 00007fb4a78696cb RDX: 00007fff0398c938 RSI: 00007fff0398dcf6 RDI: 00007fff0398dcdf RBP: 0000000000400800 R08: 00007fb4a7bb8d20 R09: 00007fb4a7bb8d20 R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000400710 R13: 00007fff0398c910 R14: 0000000000000000 R15: 0000000000000000 LustreError: 6737:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: filter-lustre-OST0000_UUID lock: ffff9604d81de000/0x8c57fda7fd0b28ed lrc: 3/0,0 mode: PW/PW res: [0x2c0000401:0x1dd:0x0].0x0 rrc: 7 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000400020020 nid: 0@lo remote: 0x8c57fda7fd0b28e6 expref: 32 pid: 13644 timeout: 521 lvb_type: 0 LustreError: 12549:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff9604aea21180 x1839216803893120/t0(0) o105->lustre-MDT0000@0@lo:15/16 lens 336/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 Lustre: mdt_io00_008: service thread pid 12549 completed after 106.428s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 10283:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) ### lock on destroyed export 00000000c97c0900 ns: mdt-lustre-MDT0000_UUID lock: ffff96051ac10800/0x8c57fda7fd0da4b9 lrc: 3/0,0 mode: PR/PR res: [0x200000403:0x1db6:0x0].0x0 bits 0x1b/0x0 rrc: 2 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x8c57fda7fd0a449b expref: 291 pid: 10283 timeout: 0 lvb_type: 0 LustreError: 10283:0:(ldlm_lockd.c:1453:ldlm_handle_enqueue()) Skipped 7 previous similar messages LustreError: lustre-MDT0000-mdc-ffff960478ebb800: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: Skipped 4 previous similar messages LustreError: 8414:0:(ldlm_lockd.c:2564:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1754014197 with bad export cookie 10112830386224154947 Lustre: lustre-MDT0000-mdc-ffff960478ebb800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff960478ebb800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 117304:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff960478ebb800: inode [0x200000403:0x1c2b:0x0] mdc close failed: rc = -108 LustreError: 121396:0:(llite_lib.c:2040:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 117304:0:(file.c:248:ll_close_inode_openhandle()) Skipped 33 previous similar messages Lustre: mdt_io00_021: service thread pid 17050 completed after 105.564s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 120841:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff960478ebb800: [0x200000403:0x2:0x0] lock enqueue fails: rc = -5 LustreError: 120415:0:(statahead.c:1807:is_first_dirent()) lustre: reading dir [0x200000403:0x2:0x0] at 0 stat_pid = 121517 : rc = -5 LustreError: 120635:0:(file.c:6072:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x1dbc:0x0] error: rc = -108 LustreError: 120635:0:(file.c:6072:ll_inode_revalidate_fini()) Skipped 11 previous similar messages LustreError: 120841:0:(mdc_request.c:1477:mdc_read_page()) Skipped 5 previous similar messages LustreError: 120415:0:(statahead.c:1807:is_first_dirent()) Skipped 8 previous similar messages LustreError: lustre-OST0000-osc-ffff9604795be800: This client was evicted by lustre-OST0000; in progress operations using this service will fail. Lustre: mdt_io00_017: service thread pid 14234 completed after 105.050s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_010: service thread pid 13190 completed after 104.925s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_015: service thread pid 13963 completed after 104.347s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_018: service thread pid 14398 completed after 104.236s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_020: service thread pid 17021 completed after 104.110s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_001: service thread pid 6761 completed after 104.454s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
Lustre: 54840:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000403:0x11f3:0x0] with magic=0xbd60bd0 Lustre: 54840:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 19 previous similar messages Lustre: mdt_io00_013: service thread pid 13724 completed after 103.501s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_006: service thread pid 11350 completed after 103.164s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_019: service thread pid 16998 completed after 102.980s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_016: service thread pid 14098 completed after 102.971s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_012: service thread pid 13533 completed after 102.412s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_011: service thread pid 13332 completed after 102.518s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_009: service thread pid 12854 completed after 102.440s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_022: service thread pid 17899 completed after 102.614s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_000: service thread pid 6760 completed after 101.997s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
Lustre: lustre-MDT0000-mdc-ffff960478ebb800: Connection restored to (at 0@lo) Lustre: mdt_io00_007: service thread pid 11486 completed after 101.915s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_014: service thread pid 13839 completed after 101.950s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000404:0x806:0x0]: rc = -2 LustreError: 23767:0:(mdd_object.c:3901:mdd_close()) Skipped 2 previous similar messages LustreError: 11486:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) lustre-MDD0002: '13' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 13' to finish migration: rc = -1 LustreError: 11486:0:(mdd_dir.c:4733:mdd_migrate_cmd_check()) Skipped 12 previous similar messages LustreError: 179901:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0000-osc-ffff9604795be800: namespace resource [0x2c0000401:0x1dd:0x0].0x0 (ffff96050b941400) refcount nonzero (3) after lock cleanup; forcing cleanup. Lustre: lustre-OST0000-osc-ffff9604795be800: Connection restored to (at 0@lo) Lustre: 6761:0:(mdt_reint.c:2484:mdt_reint_migrate()) lustre-MDT0000: [0x200000403:0x2:0x0]/14 is open, migrate only dentry 14[185101]: segfault at 8 ip 00007fa066f9b875 sp 00007ffd7b1171a0 error 4 in ld-2.28.so[7fa066f7a000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: dir [0x200000405:0x14a:0x0] stripe 1 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 24 previous similar messages traps: 17[191079] general protection fault ip:55c848e7166c sp:7ffec23d4298 error:0 in 17[55c848e6b000+7000] 5[191322]: segfault at 8 ip 00007f00112bd875 sp 00007ffc41921710 error 4 in ld-2.28.so[7f001129c000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 12[191835]: segfault at 8 ip 00007f3c50c38875 sp 00007fff43bdf780 error 4 in ld-2.28.so[7f3c50c17000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 191393:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff9604795be800: dir page locate: [0x280000405:0xa35:0x0] at 0: rc -5 LustreError: 191393:0:(mdc_request.c:1492:mdc_read_page()) Skipped 7 previous similar messages LustreError: 193494:0:(lov_object.c:1358:lov_layout_change()) lustre-clilov-ffff960478ebb800: cannot apply new layout on [0x280000404:0x6b5:0x0] : rc = -5 LustreError: 193494:0:(lov_object.c:1358:lov_layout_change()) Skipped 1 previous similar message 17[199941]: segfault at 8 ip 00007efc81b43875 sp 00007ffc7a7f23c0 error 4 in ld-2.28.so[7efc81b22000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 31091:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0002: [0x280000405:0xd7b:0x0] migrate mdt count mismatch 3 != 1 LustreError: 31091:0:(mdt_xattr.c:406:mdt_dir_layout_update()) Skipped 1 previous similar message 6[207178]: segfault at 8 ip 00007fa795346875 sp 00007ffef1ee6e40 error 4 in ld-2.28.so[7fa795325000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 
85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 203421:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000404:0x1cdf:0x0] error -5. LustreError: 6761:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=4 index=3 hash=crush:0x82000003 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool= 2[213044]: segfault at 8 ip 00007f1d1b416875 sp 00007ffc785fe770 error 4 in ld-2.28.so[7f1d1b3f5000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 traps: 1[224148] general protection fault ip:558758331081 sp:7ffd7353a070 error:0 in 1[55875832d000+7000] LustreError: 219089:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff9604795be800: dir page locate: [0x240000403:0x5257:0x0] at 0: rc -5 LustreError: 219089:0:(mdc_request.c:1492:mdc_read_page()) Skipped 15 previous similar messages Lustre: 17050:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/3, destroy: 0/0/0 Lustre: 17050:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 4062 previous similar messages Lustre: 17050:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 4/4/0, xattr_set: 11/727/0 Lustre: 17050:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 4062 previous similar messages Lustre: 17050:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/78/0 Lustre: 17050:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 4062 previous similar messages Lustre: 17050:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 14/287/3, delete: 0/0/0 Lustre: 17050:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 4062 previous similar messages Lustre: 17050:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 8/8/0, ref_del: 0/0/0 Lustre: 
17050:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 4062 previous similar messages 6[227893]: segfault at 8 ip 00007f1d1c754875 sp 00007fff17a4cfa0 error 4 in ld-2.28.so[7f1d1c733000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 10859:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=2 index=1 hash=crush:0x82000003 version=1 migrate_offset=1 migrate_hash=fnv_1a_64:2 pool= LustreError: 10859:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x240000404:0x1b5b:0x0]/0 failed: rc = -9 LustreError: 10859:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 63 previous similar messages LustreError: 228384:0:(llite_lib.c:1889:ll_update_lsm_md()) lustre: [0x200000405:0x709:0x0] dir layout mismatch: LustreError: 228384:0:(llite_lib.c:1889:ll_update_lsm_md()) Skipped 4 previous similar messages LustreError: 228384:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=2 count=2 index=0 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool= LustreError: 228384:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x200000400:0x6a:0x0] LustreError: 228384:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) Skipped 14 previous similar messages LustreError: 228384:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=0 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=2 migrate_hash=crush:3 pool= Lustre: 12854:0:(osd_internal.h:1334:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 512 < left 816, rollback = 2 Lustre: 12854:0:(osd_internal.h:1334:osd_trans_exec_op()) Skipped 3854 previous similar messages Lustre: 14398:0:(mdd_dir.c:4812:mdd_migrate_object()) lustre-MDD0000: [0x200000404:0x1a50:0x0]/8 is open, migrate only 
dentry Lustre: 14398:0:(mdd_dir.c:4812:mdd_migrate_object()) Skipped 77 previous similar messages | Link to test |
racer test 2: racer rename: centos-115.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 2ec8f1067 P4D 2ec8f1067 PUD 2f7fba067 PMD 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 5 PID: 794049 Comm: ll_sa_794003 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffa6cd4c6dbe90 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000020001b RDX: 000000000020001c RSI: ffff905704888448 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff90570bcdf200 R11: 0000000000008962 R12: ffff905704888400 R13: ffff90570bcdf2b8 R14: ffff9057048880c8 R15: ffff905704888448 FS: 0000000000000000(0000) GS:ffff9057f2340000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000002f7fbf000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? update_load_avg+0x9f/0xa40 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? _raw_spin_unlock_irqrestore+0x2b/0x60 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix libata serio_raw dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 391018:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff905679302680 x1838929363914368/t4295179259(0) o101->dde8d149-089a-45e1-b2bc-6108fecbfdef@0@lo:208/0 lens 376/20504 e 0 to 0 dl 1753742143 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 15575:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff90567fb9bb80 x1838929366317440/t4295179737(0) o101->843f4068-2859-4f24-a27e-f0bef0f75960@0@lo:210/0 lens 376/45376 e 0 to 0 dl 1753742145 ref 1 fl Interpret:H/602/0 rc 0/0 job:'lfs.0' uid:0 gid:0 projid:0 Lustre: 17762:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff905632b57a80 x1838929369502720/t4295180162(0) o101->843f4068-2859-4f24-a27e-f0bef0f75960@0@lo:214/0 lens 376/47368 e 0 to 0 dl 1753742149 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 31459:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9056ea4c2680 x1838929372536576/t4295180325(0) o101->dde8d149-089a-45e1-b2bc-6108fecbfdef@0@lo:218/0 lens 376/47368 e 0 to 0 dl 1753742153 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 402600:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9056fe7fb480 x1838929382959360/t4295181278(0) o101->843f4068-2859-4f24-a27e-f0bef0f75960@0@lo:231/0 lens 376/47368 e 0 to 0 dl 1753742166 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 
projid:0 Lustre: 402600:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 2 previous similar messages Lustre: 402372:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9056f5e5db00 x1838929394977792/t4295265604(0) o101->dde8d149-089a-45e1-b2bc-6108fecbfdef@0@lo:241/0 lens 376/44056 e 0 to 0 dl 1753742176 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 ODEBUG: object 000000004171eeae is on stack 00000000af2caf61, but NOT annotated. WARNING: CPU: 15 PID: 391056 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix libata serio_raw dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 15 PID: 391056 Comm: mdt00_060 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: 5e 9e 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa6cd53bf74a0 EFLAGS: 00010002 RAX: 0000000000000050 RBX: ffffa6cd53bf75a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff9057f25de5a8 RDI: ffff9057f25de5a8 RBP: ffffffff9ed05ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa6cd53bf7298 R12: ffffffffa05079e8 R13: 0000000000020680 R14: ffffffffa05079e0 R15: ffff9056d04882d0 FS: 0000000000000000(0000) GS:ffff9057f25c0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f9b735dd4c0 
CR3: 000000025c448000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? __might_sleep+0x59/0xc0 ? slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace c96f9a86c70b527b ]--- ODEBUG: object 00000000e68df7ca is on stack 00000000fc08133e, but NOT annotated. WARNING: CPU: 12 PID: 392081 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix libata serio_raw dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 12 PID: 392081 Comm: mdt00_070 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: 5e 9e 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa6cd541874a0 EFLAGS: 00010006 RAX: 0000000000000050 RBX: ffffa6cd541875a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff9057f251e5a8 RDI: ffff9057f251e5a8 RBP: ffffffff9ed05ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa6cd54187298 R12: ffffffffa04fd528 R13: 00000000000161c0 R14: ffffffffa04fd520 R15: ffff9056440b8a00 FS: 0000000000000000(0000) GS:ffff9057f2500000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00005586dc001f44 CR3: 0000000185b40000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? 
__debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? string_nocheck+0x77/0xa0 ? string+0x58/0x70 ? slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace c96f9a86c70b527c ]--- ODEBUG: object 00000000a986938e is on stack 0000000095a5c6fa, but NOT annotated. 
WARNING: CPU: 9 PID: 391117 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix libata serio_raw dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 9 PID: 391117 Comm: mdt00_065 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: 5e 9e 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa6cd53d874a0 EFLAGS: 00010002 RAX: 0000000000000050 RBX: ffffa6cd53d875a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff9057f245e5a8 RDI: ffff9057f245e5a8 RBP: ffffffff9ed05ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa6cd53d87298 R12: ffffffffa04ec468 R13: 0000000000005100 R14: ffffffffa04ec460 R15: ffff90572ec0b690 FS: 0000000000000000(0000) GS:ffff9057f2440000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f7c9120e008 CR3: 00000002895cb000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? string_nocheck+0x77/0xa0 ? 
string+0x58/0x70 ? slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace c96f9a86c70b527d ]--- ODEBUG: object 00000000154cf87f is on stack 00000000917db0fa, but NOT annotated. 
WARNING: CPU: 6 PID: 396927 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) ec(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix libata serio_raw dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 6 PID: 396927 Comm: mdt00_083 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: 5e 9e 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa6cd565274a0 EFLAGS: 00010006 RAX: 0000000000000050 RBX: ffffa6cd565275a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff9057f239e5a8 RDI: ffff9057f239e5a8 RBP: ffffffff9ed05ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa6cd56527298 R12: ffffffffa05412a8 R13: 0000000000059f40 R14: ffffffffa05412a0 R15: ffff9056d3face88 FS: 0000000000000000(0000) GS:ffff9057f2380000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fb4a08cc690 CR3: 000000019b265000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? 
slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0xccd/0x23b0 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace c96f9a86c70b527e ]--- Lustre: 92293:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff905693c65400 x1838929411871616/t4295183309(0) o101->843f4068-2859-4f24-a27e-f0bef0f75960@0@lo:257/0 lens 376/47864 e 0 to 0 dl 1753742192 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 92293:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 3 previous similar messages Lustre: 31459:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9056801fd080 x1838929454364800/t4295269258(0) o101->dde8d149-089a-45e1-b2bc-6108fecbfdef@0@lo:302/0 lens 376/47272 e 0 to 0 dl 1753742237 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 31459:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 4 previous similar messages LustreError: 642827:0:(tgt_grant.c:449:tgt_grant_space_left()) lustre-OST0001: cli lustre-OST0001_UUID/ffff90566da6e800 left=279552000 < tot_grant=279615424 unstable=28672 pending=28672 dirty=28672 LustreError: 644909:0:(tgt_grant.c:449:tgt_grant_space_left()) lustre-OST0001: cli lustre-OST0001_UUID/ffff90566da6e800 left=279535616 < tot_grant=279575104 unstable=0 pending=0 dirty=28672 LustreError: 644909:0:(tgt_grant.c:449:tgt_grant_space_left()) Skipped 5 previous similar messages Lustre: 14927:0:(out_handler.c:879:out_tx_end()) lustre-MDT0001-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:562: rc = -2 LustreError: 14927:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0001-osd: Oops, can not rollback index_delete yet: rc = -524 LustreError: 14927:0:(out_lib.c:1168:out_tx_index_delete_undo()) Skipped 2 previous similar messages LustreError: 396927:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 391098:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] 
doesn't exist!: rc = -14 LustreError: 15575:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 10457:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 Lustre: 133382:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff90565b730e00 x1838929531115776/t4295218370(0) o101->843f4068-2859-4f24-a27e-f0bef0f75960@0@lo:371/0 lens 376/47272 e 0 to 0 dl 1753742306 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 133382:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 5 previous similar messages LustreError: 392081:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 392081:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 5 previous similar messages Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c0000400 to 0x2c0000403 Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x340000400 to 0x340000403 Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000402 to 0x2c0000404 LustreError: 16218:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 16218:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 2 previous similar messages Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x300000402 to 0x300000403 Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x300000401 to 0x300000404 Lustre: lustre-OST0003-osc-MDT0002: update sequence from 0x380000400 to 0x380000403 Lustre: lustre-OST0001-osc-MDT0002: update sequence from 0x300000400 to 0x300000405 Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x340000402 to 0x340000404 LustreError: 187550:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 187550:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 4 previous similar messages Lustre: 
lustre-OST0002-osc-MDT0001: update sequence from 0x340000401 to 0x340000405 Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x380000401 to 0x380000404 Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c0000401 to 0x2c0000405 Lustre: lustre-OST0003-osc-MDT0000: update sequence from 0x380000402 to 0x380000405 Lustre: 10660:0:(out_handler.c:879:out_tx_end()) lustre-MDT0000-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:562: rc = -2 Lustre: 10660:0:(out_handler.c:879:out_tx_end()) Skipped 1 previous similar message LustreError: 10660:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0000-osd: Oops, can not rollback index_delete yet: rc = -524 Lustre: 396840:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0002: opcode 0: before 515 < left 1051, rollback = 0 Lustre: 396840:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 9906 previous similar messages Lustre: 396840:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 6/24/0, destroy: 1/4/0 Lustre: 396840:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 10016 previous similar messages Lustre: 396840:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1051/1051/0, xattr_set: 1576/14820/0 Lustre: 396840:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 10016 previous similar messages Lustre: 396840:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 28/157/0, punch: 0/0/0, quota 1/3/0 Lustre: 396840:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 10016 previous similar messages Lustre: 396840:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 7/118/0, delete: 2/5/0 Lustre: 396840:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 10016 previous similar messages Lustre: 396840:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 2/2/1 Lustre: 396840:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 10015 previous similar messages Lustre: 
15596:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9056830c0700 x1838929669039744/t4295200995(0) o101->dde8d149-089a-45e1-b2bc-6108fecbfdef@0@lo:466/0 lens 376/48232 e 0 to 0 dl 1753742401 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 15596:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 18 previous similar messages LustreError: 13894:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 13894:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 26 previous similar messages LustreError: 7307:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0000-osd: Oops, can not rollback index_delete yet: rc = -524 Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x300000403 to 0x300000406 Lustre: 123070:0:(out_handler.c:879:out_tx_end()) lustre-MDT0001-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:532: rc = -17 Lustre: 123070:0:(out_handler.c:879:out_tx_end()) Skipped 1 previous similar message LustreError: 123070:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0001-osd: Oops, can not rollback index_delete yet: rc = -524 Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000404 to 0x2c0000406 LustreError: 13860:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0002: [0x280000408:0x35d8:0x0] doesn't exist!: rc = -14 LustreError: 13860:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 115 previous similar messages LustreError: 513174:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 513174:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 92 previous similar messages LustreError: 605168:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0000-osd: Oops, can not rollback index_delete yet: rc = -524 Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c0000403 to 0x2c0000407 
Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x300000404 to 0x300000407 Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x340000404 to 0x340000406 Lustre: lustre-OST0003-osc-MDT0000: update sequence from 0x380000405 to 0x380000406 Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x340000403 to 0x340000407 Lustre: 16218:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff905792a92300 x1838929909591936/t4295302242(0) o101->843f4068-2859-4f24-a27e-f0bef0f75960@0@lo:628/0 lens 376/48056 e 0 to 0 dl 1753742563 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 16218:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 33 previous similar messages Lustre: lustre-OST0002-osc-MDT0001: update sequence from 0x340000405 to 0x340000408 Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x380000404 to 0x380000407 Lustre: lustre-OST0003-osc-MDT0002: update sequence from 0x380000403 to 0x380000408 Lustre: 74676:0:(out_handler.c:879:out_tx_end()) lustre-MDT0001-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:532: rc = -17 Lustre: 74676:0:(out_handler.c:879:out_tx_end()) Skipped 1 previous similar message LustreError: 74676:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0001-osd: Oops, can not rollback index_delete yet: rc = -524 Lustre: lustre-OST0001-osc-MDT0002: update sequence from 0x300000405 to 0x300000408 Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c0000405 to 0x2c0000408 Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x300000406 to 0x300000409 Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000406 to 0x2c0000409 Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c0000407 to 0x2c000040a LustreError: 459128:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040d:0x152a:0x0] doesn't exist!: rc = -14 LustreError: 459128:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 350 
previous similar messages Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x340000407 to 0x340000409 Lustre: lustre-OST0003-osc-MDT0002: update sequence from 0x380000408 to 0x380000409 | Link to test |
racer test 2: racer rename: onyx-146vm7.onyx.whamcloud.com,onyx-146vm8 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 835011 Comm: ll_sa_834868 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.58.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 23 6c 63 5e 5b c3 cc cc cc cc 48 89 df e8 85 0a af ff 39 05 73 90 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa9af42b43e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000d RDX: 000000000010000e RSI: ffff9c244ae9b370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9c244cb18200 R11: 0000000000001700 R12: ffff9c244ae9b090 R13: ffff9c244cb18298 R14: ffff9c244cb18200 R15: ffff9c244cb182a8 FS: 0000000000000000(0000) GS:ffff9c253bc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000004fe10004 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd intel_rapl_msr grace fscache intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ata_generic ext4 mbcache jbd2 ata_piix libata crc32c_intel serio_raw virtio_blk virtio_net net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Autotest: Test running for 225 minutes (lustre-reviews_review-dne-part-9_115173.34) | Link to test |
racer test 1: racer on clients: centos-95.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 4 PID: 69247 Comm: ll_sa_69214 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffac068ecd7e90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000080200016 RDX: 0000000080200017 RSI: ffffa0acf2e36f88 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffffa0ad04031200 R11: 0000000000000000 R12: ffffa0acf2e36f40 R13: ffffa0ad040312b8 R14: ffffa0acf2e36c08 R15: ffffa0acf2e36f88 FS: 0000000000000000(0000) GS:ffffa0ae72300000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000017a298000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) pcspkr virtio_balloon i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 6065:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6065:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6065:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6065:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6065:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6065:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8247:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffffa0ace91a8700 x1838349611190656/t4294968345(0) o101->25c29df3-9735-4086-83f4-816a6ef534ae@0@lo:613/0 lens 376/864 e 0 to 0 dl 1753186868 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 19[12847]: segfault at 0 ip 000055b9ea807b47 sp 00007ffd1938ed70 error 6 in 19[55b9ea803000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 6066:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6066:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6066:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, 
destroy: 0/0/0 Lustre: 6066:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6066:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6066:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6066:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6066:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6066:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6066:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6066:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6066:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8874:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8874:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 8874:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8874:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8874:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8874:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8874:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 8874:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8874:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8874:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8874:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8874:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar 
message Lustre: 8587:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8587:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 8587:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8587:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8587:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8587:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8587:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8587:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8587:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8587:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8587:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8587:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8308:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8308:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 8308:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8308:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8308:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8308:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8308:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8308:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8308:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 
0/0/0, delete: 0/0/0 Lustre: 8308:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 8308:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 8308:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message LustreError: 24347:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa0acda000000: inode [0x200000401:0x5ae:0x0] mdc close failed: rc = -13 Lustre: 6066:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 515 < left 618, rollback = 7 Lustre: 6066:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 6066:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6066:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6066:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 6066:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6066:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6066:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6066:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6066:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 6066:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6066:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 12813:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x738:0x0] with magic=0xbd60bd0 Lustre: 6065:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6065:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 15 previous similar messages 
Lustre: 6065:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6065:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 6065:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6065:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 6065:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 6065:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 6065:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6065:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 15 previous similar messages Lustre: 6065:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6065:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 15 previous similar messages 16[35201]: segfault at 0 ip 000056144eef9b47 sp 00007ffeb44181c0 error 6 in 1[56144eef5000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Lustre: 6064:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6064:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 31 previous similar messages Lustre: 6064:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6064:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 6064:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6064:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 6064:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/78/0 Lustre: 6064:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 31 
previous similar messages Lustre: 6064:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6064:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 6064:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6064:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 31 previous similar messages Lustre: 11745:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0xce7:0x0] with magic=0xbd60bd0 Lustre: 11745:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 2[47404]: segfault at 8 ip 00007ff4d378d875 sp 00007fff889d1000 error 4 in ld-2.28.so[7ff4d376c000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 51932:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa0acda000000: inode [0x200000401:0xeb7:0x0] mdc close failed: rc = -13 18[52069]: segfault at 8 ip 00007f4ea73d0875 sp 00007ffc8841e480 error 4 in ld-2.28.so[7f4ea73af000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 12816:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0x10d2:0x0] with magic=0xbd60bd0 Lustre: 12816:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message task:mdt00_020 state:I Lustre: mdt00_004: service thread pid 6899 was inactive for 43.309 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: task:mdt00_019 state:I Lustre: Skipped 2 previous similar messages stack:0 pid:11745 ppid:2 flags:0x80004080 Call Trace: Lustre: mdt00_002: service thread pid 5787 was inactive for 43.347 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] stack:0 pid:12810 ppid:2 flags:0x80004080 ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_getattr_name_lock+0x274f/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] task:mdt00_004 state:I stack:0 pid:6899 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] schedule+0xc0/0x180 tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] schedule_timeout+0xb4/0x190 ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] schedule_timeout+0xb4/0x190 kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? 
__next_timer_interrupt+0x160/0x160 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? mdt_obd_postrecov+0x100/0x100 [mdt] ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_getattr_name_lock+0x274f/0x3350 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_intent_policy+0x14b/0x670 [mdt] mdt_object_lock+0x9e/0x240 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 mdt_reint_link+0xa39/0x10d0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_main+0xd30/0x1450 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 5775:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffffa0acac61da00/0x7d358bcc0cb74e49 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x13a1:0x0].0x0 bits 0x1b/0x0 rrc: 11 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x7d358bcc0cb74e2d expref: 599 pid: 8247 timeout: 368 lvb_type: 0 Lustre: mdt00_018: service thread pid 11650 completed after 100.823s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_009: service thread pid 8247 completed after 100.803s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_017: service thread pid 11645 completed after 100.899s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_016: service thread pid 11642 completed after 100.698s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 8573:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000868a791c ns: mdt-lustre-MDT0000_UUID lock: ffffa0acf8435c00/0x7d358bcc0cb750b1 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x1219:0x0].0x0 bits 0x13/0x0 rrc: 5 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0x7d358bcc0cb75072 expref: 447 pid: 8573 timeout: 0 lvb_type: 0 Lustre: mdt00_013: service thread pid 8573 completed after 100.882s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: lustre-MDT0000-mdc-ffffa0acda2b6000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffffa0acda2b6000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: mdt00_002: service thread pid 5787 completed after 100.703s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_020: service thread pid 12810 completed after 100.681s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_004: service thread pid 6899 completed after 100.666s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffffa0acda2b6000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. Lustre: mdt00_019: service thread pid 11745 completed after 100.679s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 5787:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffffa0acf1468700 x1838349652342784/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 65033:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0x1219:0x0] error: rc = -5 LustreError: 65298:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0x13a1:0x0] error: rc = -5 LustreError: 65546:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa0acda2b6000: inode [0x200000401:0x1220:0x0] mdc close failed: rc = -108 LustreError: 65546:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 65033:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 11 previous similar messages Lustre: lustre-MDT0000-mdc-ffffa0acda2b6000: Connection restored to (at 0@lo) Lustre: 8587:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7 Lustre: 8587:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 49 previous similar messages Lustre: 8587:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 8587:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 49 previous similar messages Lustre: 8587:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 8587:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 49 previous similar messages Lustre: 8587:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 8587:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 49 previous similar messages Lustre: 8587:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 8587:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 49 previous similar messages Lustre: 8587:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, 
ref_del: 0/0/0 Lustre: 8587:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 49 previous similar messages LustreError: 5775:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffffa0ac7e526600/0x7d358bcc0cb87695 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x12f9:0x0].0x0 bits 0x13/0x0 rrc: 9 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0x7d358bcc0cb87664 expref: 79 pid: 11745 timeout: 472 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffffa0acda2b6000: operation mds_reint to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffffa0acda2b6000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 1 previous similar message LustreError: 19183:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1753187245 with bad export cookie 9022271137009590133 LustreError: lustre-MDT0000-mdc-ffffa0acda2b6000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 66598:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 67041:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa0acda2b6000: inode [0x200000401:0x1220:0x0] mdc close failed: rc = -108 LustreError: 67041:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages Lustre: lustre-MDT0000-mdc-ffffa0acda2b6000: Connection restored to (at 0@lo) | Link to test |
racer test 1: racer on clients: centos-50.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 6 PID: 290176 Comm: ll_sa_290064 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffa3f69f257e90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008020001e RDX: 000000008020001f RSI: ffff968a7d47ef88 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff968a60c9ac00 R11: ffffffffffffffff R12: ffff968a7d47ef40 R13: ffff968a60c9acb8 R14: ffff968a7d47ec08 R15: ffff968a7d47ef88 FS: 0000000000000000(0000) GS:ffff968bf2380000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000001a796f000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? update_load_avg+0x9f/0xa40 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) i2c_piix4 virtio_balloon pcspkr rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 6733:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 510 < left 610, rollback = 2 Lustre: 6733:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/2, destroy: 0/0/0 Lustre: 6733:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 8/610/0 Lustre: 6733:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 1/3/0 Lustre: 6733:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 11/239/4, delete: 0/0/0 Lustre: 6733:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 5/5/0, ref_del: 0/0/0 Lustre: 9863:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff968a2f73a300 x1838278632681856/t4294967479(0) o101->070aa2f4-d916-4640-a923-4e9d0cf8eeda@0@lo:123/0 lens 376/864 e 0 to 0 dl 1753119183 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 6733:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/11 is open, migrate only dentry Lustre: 11260:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 514 < left 610, rollback = 2 Lustre: 11260:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 5 previous similar messages Lustre: 11260:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/0, destroy: 0/0/0 Lustre: 11260:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 
11260:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 8/610/0 Lustre: 11260:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11260:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 1/3/0 Lustre: 11260:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11260:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 12/275/2, delete: 0/0/0 Lustre: 11260:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11260:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 6/6/0, ref_del: 0/0/0 Lustre: 11260:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages Lustre: 11694:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff968a54359180 x1838278633766528/t4294967865(0) o101->070aa2f4-d916-4640-a923-4e9d0cf8eeda@0@lo:126/0 lens 376/864 e 0 to 0 dl 1753119186 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: mdt00_018: service thread pid 10743 was inactive for 42.155 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: Lustre: mdt00_017: service thread pid 10664 was inactive for 42.448 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: mdt00_007: service thread pid 9739 was inactive for 42.455 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. task:mdt_io00_002 state:I task:mdt_io00_008 state:I stack:0 pid:12108 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_fini+0xadc/0x1500 [ptlrpc] ldlm_cli_enqueue+0x47f/0xe40 [ptlrpc] Lustre: Skipped 2 previous similar messages ? 
ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_object_sync.isra.38+0x610/0x610 [mdt] osp_md_object_lock+0x219/0x3f0 [osp] lod_object_lock+0x18b/0xa20 [lod] mdd_object_lock+0x3d/0x110 [mdd] mdt_remote_object_lock_try+0x112/0x3d0 [mdt] mdt_object_lock_internal+0x118/0x5a0 [mdt] mdt_rename_lock+0x1e5/0x460 [mdt] mdt_reint_migrate+0xa3d/0x23c0 [mdt] ? lustre_msg_add_version+0x29/0xd0 [ptlrpc] stack:0 pid:6733 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 ? lustre_pack_reply_v2+0x282/0x380 [ptlrpc] ? lu_ucred+0x25/0x40 [obdclass] ? mdt_ucred+0x19/0x30 [mdt] ? mdt_root_squash+0x26/0x5a0 [mdt] ? ucred_set_audit_enabled.isra.12+0x28/0xa0 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] task:mdt00_018 state:I ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 schedule+0xc0/0x180 stack:0 pid:10743 ppid:2 flags:0x80004080 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] Call Trace: ? woken_wake_function+0x30/0x30 __schedule+0x351/0xcb0 ldlm_cli_enqueue_fini+0xadc/0x1500 [ptlrpc] schedule+0xc0/0x180 ldlm_cli_enqueue+0x47f/0xe40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_object_sync.isra.38+0x610/0x610 [mdt] osp_md_object_lock+0x219/0x3f0 [osp] lod_object_lock+0x18b/0xa20 [lod] ? lu_object_find_at+0x5bc/0xb80 [obdclass] ? lod_lookup+0x1b/0x30 [lod] mdd_object_lock+0x3d/0x110 [mdd] mdt_remote_object_lock_try+0x112/0x3d0 [mdt] mdt_object_pdo_lock+0x335/0x910 [mdt] ? 
mdt_migrate_lookup.isra.22+0x781/0x17f0 [mdt] mdt_parent_lock+0x8f/0x370 [mdt] schedule_timeout+0xb4/0x190 mdt_reint_migrate+0xed1/0x23c0 [mdt] ? mdt_ucred+0x19/0x30 [mdt] ? ucred_set_audit_enabled.isra.12+0x28/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] ? __next_timer_interrupt+0x160/0x160 tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_getattr_name_lock+0x274f/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Lustre: mdt_io00_011: service thread pid 12727 was inactive for 43.265 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. 
Lustre: Skipped 25 previous similar messages Lustre: mdt_out00_000: service thread pid 6723 was inactive for 74.751 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9689cbb3d600/0xfd07213738d472aa lrc: 3/0,0 mode: PR/PR res: [0x240000403:0x3a:0x0].0x0 bits 0x1/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd07213738d4707a expref: 117 pid: 6718 timeout: 192 lvb_type: 0 Lustre: mdt_out00_000: service thread pid 6723 completed after 74.772s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_011: service thread pid 12727 completed after 100.633s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 10050:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff968a8c350380 x1838278634387840/t0(0) o104->lustre-MDT0001@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 Lustre: mdt00_005: service thread pid 8857 completed after 104.096s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_out00_003: service thread pid 7255 completed after 103.924s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_015: service thread pid 10050 completed after 104.056s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 7527:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1753119277 with bad export cookie 18232578137492990155 Lustre: lustre-MDT0001-mdc-ffff968a716c2000: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0001-mdc-ffff968a716c2000: This client was evicted by lustre-MDT0001; in progress operations using this service will fail. Lustre: mdt_out00_002: service thread pid 7252 completed after 103.977s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_011: service thread pid 9863 completed after 104.068s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_031: service thread pid 11360 completed after 103.593s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_018: service thread pid 10743 completed after 103.649s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 10664:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 0000000020cd7e59 ns: mdt-lustre-MDT0001_UUID lock: ffff968aa94f0000/0xfd07213738d38281 lrc: 3/0,0 mode: PR/PR res: [0x240000403:0x1:0x0].0x0 bits 0x12/0x0 rrc: 20 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0xfd07213738d3822d expref: 21 pid: 10664 timeout: 0 lvb_type: 0 Lustre: mdt00_017: service thread pid 10664 completed after 103.943s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_007: service thread pid 9739 completed after 103.952s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_010: service thread pid 9784 completed after 103.854s. 
This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_006: service thread pid 9668 completed after 103.871s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_025: service thread pid 11134 completed after 103.908s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0001-mdc-ffff968a716c2000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: mdt00_026: service thread pid 11209 completed after 103.950s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_009: service thread pid 9761 completed after 103.757s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_016: service thread pid 10054 completed after 103.599s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_029: service thread pid 11304 completed after 103.872s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_028: service thread pid 11257 completed after 104.015s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_013: service thread pid 10007 completed after 103.869s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_002: service thread pid 6720 completed after 103.693s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_027: service thread pid 11230 completed after 103.876s. 
This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_023: service thread pid 11056 completed after 103.977s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 11571:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x240000403:0x1:0x0] error: rc = -5 LustreError: 11577:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0001-mdc-ffff968a716c2000: [0x240000403:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 11577:0:(statahead.c:1807:is_first_dirent()) lustre: reading dir [0x240000403:0x1:0x0] at 0 stat_pid = 0 : rc = -108 LustreError: 11577:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a716c2000: inode [0x240000404:0x5:0x0] mdc close failed: rc = -108 LustreError: 13116:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0001-mdc-ffff968a716c2000: namespace resource [0x200000403:0x1:0x0].0x0 (ffff968a820eb600) refcount nonzero (1) after lock cleanup; forcing cleanup. LustreError: lustre-MDT0000-mdc-ffff968a716c2000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 11917:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 Lustre: 6733:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 506 < left 788, rollback = 2 LustreError: 11917:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 1 previous similar message Lustre: 6733:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 11 previous similar messages Lustre: 6733:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/7, destroy: 0/0/0 Lustre: 6733:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 6733:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 10/788/0 Lustre: 6733:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 6733:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0 Lustre: 6733:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 6733:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 10/203/3, delete: 0/0/0 Lustre: 6733:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: 6733:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 4/4/0, ref_del: 0/0/0 Lustre: 6733:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 11 previous similar messages Lustre: lustre-MDT0000-mdc-ffff968a716c2000: Connection restored to (at 0@lo) Lustre: mdt_io00_002: service thread pid 6733 completed after 104.063s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_000: service thread pid 6731 completed after 104.426s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt_io00_004: service thread pid 11260 completed after 104.303s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
Lustre: mdt_io00_005: service thread pid 11872 completed after 104.333s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_006: service thread pid 11976 completed after 104.428s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_007: service thread pid 12061 completed after 104.780s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: 12108:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 513 < left 699, rollback = 2
Lustre: 12108:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 9 previous similar messages
Lustre: 12108:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/1, destroy: 0/0/0
Lustre: 12108:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 9 previous similar messages
Lustre: 12108:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 9/699/0
Lustre: 12108:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 9 previous similar messages
Lustre: 12108:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 1/3/0
Lustre: 12108:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 9 previous similar messages
Lustre: 12108:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 12/275/2, delete: 0/0/0
Lustre: 12108:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 9 previous similar messages
Lustre: 12108:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 6/6/0, ref_del: 0/0/0
Lustre: 12108:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 9 previous similar messages
Lustre: mdt_io00_008: service thread pid 12108 completed after 104.565s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: 12464:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x2:0x0]/9 is open, migrate only dentry
Lustre: mdt_io00_009: service thread pid 12464 completed after 104.610s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt_io00_010: service thread pid 12520 completed after 104.850s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: 10841:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 12336:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7
Lustre: 12336:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message
LustreError: 9955:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000404:0x1a:0x0]: rc = -2
LustreError: 10308:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a716c2000: inode [0x200000404:0x1a:0x0] mdc close failed: rc = -2
LustreError: 10308:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages
LustreError: 11257:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0002: [0x280000404:0x33:0x0] migrate mdt count mismatch 2 != 3
Lustre: 10587:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 10587:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message
Lustre: 10587:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 10587:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 111 previous similar messages
Lustre: 10587:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0
Lustre: 10587:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 111 previous similar messages
Lustre: 10587:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0
Lustre: 10587:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 111 previous similar messages
Lustre: 10587:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 10587:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 111 previous similar messages
Lustre: 10587:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 10587:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 111 previous similar messages
INFO: task mcreate:12200 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mcreate state:D stack:0 pid:12200 ppid:9377 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7fea7fa64041 Code: Unable to access opcode bytes at RIP 0x7fea7fa64017.
RSP: 002b:00007ffd789fa628 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fea7fa64041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffd789fcce7 RBP: 00007ffd789fcce7 R08: 00007ffd789fcce7 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffd789fa818 R14: 00007ffd789fa650 R15: fffff00000000000
INFO: task lfs:12220 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:lfs state:D stack:0 pid:12220 ppid:9366 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? lprocfs_counter_add+0x15b/0x210 [obdclass] down_write+0x80/0xd0 do_last+0x2eb/0xfc0 ? nd_jump_root+0xe5/0x160 ? path_init+0x437/0x520 path_openat+0xf7/0x500 do_filp_open+0x99/0x140 ? getname_flags+0x6e/0x330 ? __check_object_size+0xff/0x256 ? do_raw_spin_unlock+0x75/0x190 ? _raw_spin_unlock+0x12/0x30 do_sys_openat2+0x2b4/0x410 do_sys_open+0x73/0xa0 __x64_sys_openat+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7f24a35dcebf Code: Unable to access opcode bytes at RIP 0x7f24a35dce95.
RSP: 002b:00007ffc607a8f60 EFLAGS: 00000246 ORIG_RAX: 0000000000000101 RAX: ffffffffffffffda RBX: 00000000020762e0 RCX: 00007f24a35dcebf RDX: 0000000000002141 RSI: 00007ffc607b7cee RDI: 00000000ffffff9c RBP: 00000000020762b0 R08: 00000000020762e0 R09: 0000000002075012 R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000020000 R13: 00007ffc607b7cee R14: 0000000000000000 R15: 0000000000000000
INFO: task ln:12222 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:ln state:D stack:0 pid:12222 ppid:9370 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_linkat+0xc1/0x540 ? syscall_trace_enter+0x206/0x450 __x64_sys_linkat+0x28/0x40 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7ff517a5801e Code: Unable to access opcode bytes at RIP 0x7ff517a57ff4.
RSP: 002b:00007fff95d16468 EFLAGS: 00000246 ORIG_RAX: 0000000000000109 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff517a5801e RDX: 00000000ffffff9c RSI: 00007fff95d17d33 RDI: 00000000ffffff9c RBP: 00000000ffffff9c R08: 0000000000000000 R09: 0000000000000000 R10: 00007fff95d17d49 R11: 0000000000000246 R12: 00007fff95d17d49 R13: 0000000000000000 R14: 00007fff95d17d33 R15: 00000000ffffff9c
INFO: task fallocate:12242 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:fallocate state:D stack:0 pid:12242 ppid:9552 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? lprocfs_counter_add+0x15b/0x210 [obdclass] down_write+0x80/0xd0 do_last+0x2eb/0xfc0 ? nd_jump_root+0xe5/0x160 ? path_init+0x437/0x520 path_openat+0xf7/0x500 do_filp_open+0x99/0x140 ? getname_flags+0x6e/0x330 ? __check_object_size+0xff/0x256 ? do_raw_spin_unlock+0x75/0x190 ? _raw_spin_unlock+0x12/0x30 do_sys_openat2+0x2b4/0x410 do_sys_open+0x73/0xa0 __x64_sys_openat+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7fbcbd2ef332 Code: Unable to access opcode bytes at RIP 0x7fbcbd2ef308.
RSP: 002b:00007ffd4c4c1820 EFLAGS: 00000246 ORIG_RAX: 0000000000000101 RAX: ffffffffffffffda RBX: 0000563f2a6e9ca0 RCX: 00007fbcbd2ef332 RDX: 0000000000000042 RSI: 00007ffd4c4c3d3a RDI: 00000000ffffff9c RBP: 0000000000000006 R08: 0000000000000000 R09: 0000000000000000 R10: 00000000000001b6 R11: 0000000000000246 R12: 00007ffd4c4c1a98 R13: 0000000000000000 R14: 00007ffd4c4c18d8 R15: 0000000000007815
INFO: task file_concat.sh:12244 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:file_concat.sh state:D stack:0 pid:12244 ppid:9493 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? lprocfs_counter_add+0x15b/0x210 [obdclass] down_write+0x80/0xd0 do_last+0x2eb/0xfc0 ? nd_jump_root+0xe5/0x160 ? path_init+0x437/0x520 path_openat+0xf7/0x500 do_filp_open+0x99/0x140 ? getname_flags+0x6e/0x330 ? __check_object_size+0xff/0x256 ? do_raw_spin_unlock+0x75/0x190 ? _raw_spin_unlock+0x12/0x30 do_sys_openat2+0x2b4/0x410 do_sys_open+0x73/0xa0 __x64_sys_openat+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7faaf5fe7332 Code: Unable to access opcode bytes at RIP 0x7faaf5fe7308.
RSP: 002b:00007ffeb2e0c220 EFLAGS: 00000246 ORIG_RAX: 0000000000000101 RAX: ffffffffffffffda RBX: 000055f0c32e4e50 RCX: 00007faaf5fe7332 RDX: 0000000000000441 RSI: 000055f0c32e7bd0 RDI: 00000000ffffff9c RBP: 00007ffeb2e0c320 R08: 0000000000000020 R09: 000055f0c32cb010 R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000001 R14: 0000000000000001 R15: 000055f0c32e7bd0
INFO: task ln:12258 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:ln state:D stack:0 pid:12258 ppid:9491 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_symlinkat+0x8d/0x170 __x64_sys_symlinkat+0x1e/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7ff2e3a9807b Code: Unable to access opcode bytes at RIP 0x7ff2e3a98051.
RSP: 002b:00007ffed3a3b158 EFLAGS: 00000246 ORIG_RAX: 000000000000010a RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff2e3a9807b RDX: 00007ffed3a3cd49 RSI: 00000000ffffff9c RDI: 00007ffed3a3cd46 RBP: 00000000ffffff9c R08: 000000000000ffff R09: 0000000000000000 R10: 00007ff2e3a939c0 R11: 0000000000000246 R12: 00007ffed3a3cd49 R13: 00007ffed3a3cd46 R14: 0000000000000000 R15: 0000000000000000
INFO: task mcreate:12281 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mcreate state:D stack:0 pid:12281 ppid:9498 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7faaf9456041 Code: Unable to access opcode bytes at RIP 0x7faaf9456017.
RSP: 002b:00007ffe88f04c38 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007faaf9456041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffe88f05ce6 RBP: 00007ffe88f05ce6 R08: 00007ffe88f05ce6 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffe88f04e28 R14: 00007ffe88f04c60 R15: fffff00000000000
INFO: task mkdir:12324 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mkdir state:D stack:0 pid:12324 ppid:9367 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mkdirat+0x74/0x160 __x64_sys_mkdir+0x1f/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7fb88625ed4b Code: Unable to access opcode bytes at RIP 0x7fb88625ed21.
RSP: 002b:00007ffc76f9eeb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000053 RAX: ffffffffffffffda RBX: 00007ffc76f9fd51 RCX: 00007fb88625ed4b RDX: 00007ffc76f9f110 RSI: 00000000000001ff RDI: 00007ffc76f9fd51 RBP: 00007ffc76f9f110 R08: 00000000000001ff R09: 000055ead0ed3270 R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc76f9fd3e R13: 00007ffc76f9fd51 R14: 000055ead0ed3290 R15: 0000000000000001
INFO: task mcreate:12356 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mcreate state:D stack:0 pid:12356 ppid:9432 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? __might_sleep+0x59/0xc0 down_write+0x80/0xd0 filename_create+0x92/0x220 do_mknodat+0x105/0x300 __x64_sys_mknod+0x23/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7f17b0e9f041 Code: Unable to access opcode bytes at RIP 0x7f17b0e9f017.
RSP: 002b:00007ffc3cd8e6a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000085 RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f17b0e9f041 RDX: 0000000000000000 RSI: 00000000000081a4 RDI: 00007ffc3cd8fce6 RBP: 00007ffc3cd8fce6 R08: 00007ffc3cd8fce6 R09: 0000000000000000 R10: fffffffffffff5cb R11: 0000000000000246 R12: 0000000000000001 R13: 00007ffc3cd8e898 R14: 00007ffc3cd8e6d0 R15: fffff00000000000
INFO: task file_concat.sh:12371 blocked for more than 120 seconds.
Tainted: G O -------- - - 4.18.0rocky8.10-debug #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:file_concat.sh state:D stack:0 pid:12371 ppid:9373 flags:0x80000080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_preempt_disabled+0x21/0x40 rwsem_down_write_slowpath+0x5d7/0xa40 ? lprocfs_counter_add+0x15b/0x210 [obdclass] down_write+0x80/0xd0 do_last+0x2eb/0xfc0 ? nd_jump_root+0xe5/0x160 ? path_init+0x437/0x520 path_openat+0xf7/0x500 do_filp_open+0x99/0x140 ? getname_flags+0x6e/0x330 ? __check_object_size+0xff/0x256 ? do_raw_spin_unlock+0x75/0x190 ? _raw_spin_unlock+0x12/0x30 do_sys_openat2+0x2b4/0x410 do_sys_open+0x73/0xa0 __x64_sys_openat+0x24/0x30 do_syscall_64+0xc1/0x3f0 entry_SYSCALL_64_after_hwframe+0x49/0xae
RIP: 0033:0x7fab8e630332 Code: Unable to access opcode bytes at RIP 0x7fab8e630308.
RSP: 002b:00007ffe179a5500 EFLAGS: 00000246 ORIG_RAX: 0000000000000101 RAX: ffffffffffffffda RBX: 000055777ddeec40 RCX: 00007fab8e630332 RDX: 0000000000000441 RSI: 000055777ddef780 RDI: 00000000ffffff9c RBP: 00007ffe179a5600 R08: 0000000000000020 R09: 000055777ddce010 R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000000003 R13: 0000000000000001 R14: 0000000000000001 R15: 000055777ddef780
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9689d4529600/0xfd07213738d56a3c lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x5:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd07213738d56a20 expref: 110 pid: 9863 timeout: 297 lvb_type: 0
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1 previous similar message
LustreError: 6704:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1753119384 with bad export cookie 18232578137492989910
LustreError: lustre-MDT0000-mdc-ffff968a69e5e800: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff968a69e5e800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 1 previous similar message
LustreError: lustre-MDT0000-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 14209:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x5:0x0] error: rc = -5
LustreError: Skipped 10 previous similar messages
LustreError: 10836:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff968a69e5e800: [0x200000400:0x4:0x0] lock enqueue fails: rc = -108
LustreError: 10836:0:(mdc_request.c:1477:mdc_read_page()) Skipped 2 previous similar messages
Lustre: dir [0x200000404:0x17:0x0] stripe 0 readdir failed: -108, directory is partially accessed!
LustreError: 13846:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x200000405:0x5:0x0] mdc close failed: rc = -5
LustreError: 14209:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 28 previous similar messages
LustreError: 13846:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages
LustreError: 13846:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000405:0x5:0x0] error -108.
Lustre: lustre-MDT0000-mdc-ffff968a69e5e800: Connection restored to (at 0@lo)
Lustre: Skipped 1 previous similar message
Lustre: 13532:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 505 < left 788, rollback = 2
Lustre: 13532:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 107 previous similar messages
Lustre: 13532:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/9, destroy: 0/0/0
Lustre: 13532:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 13532:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 10/788/0
Lustre: 13532:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 13532:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0
Lustre: 13532:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 13532:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 12/275/2, delete: 0/0/0
Lustre: 13532:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message
Lustre: 13532:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 6/6/0, ref_del: 0/0/0
Lustre: 13532:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message
LustreError: 6732:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x280000404:0x33:0x0]/18 failed: rc = -116
Lustre: 6731:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0001: [0x240000403:0x1:0x0]/10 is open, migrate only dentry
Lustre: 13094:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0000: [0x200000403:0x1:0x0]/0 is open, migrate only dentry
LustreError: 10481:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000404:0x17:0x0]: rc = -2
LustreError: 13094:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x1:0x0]/0 failed: rc = -2
LustreError: 11163:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x200000404:0x17:0x0] mdc close failed: rc = -2
LustreError: 11163:0:(file.c:248:ll_close_inode_openhandle()) Skipped 3 previous similar messages
LustreError: 16725:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000405:0x59:0x0]: rc = -5
LustreError: 16725:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
Lustre: 7517:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7
Lustre: 7517:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message
traps: 16[21031] trap invalid opcode ip:557d2973553e sp:7ffd704d97b8 error:0 in 8[557d29730000+7000]
14[24418]: segfault at 0 ip 000055972bb7a200 sp 00007fffb509c7d8 error 6 in 14[55972bb78000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Lustre: 12336:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 12336:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 3 previous similar messages
Lustre: 12336:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 12336:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 197 previous similar messages
Lustre: 12336:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0
Lustre: 12336:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 197 previous similar messages
Lustre: 12336:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0
Lustre: 12336:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 197 previous similar messages
Lustre: 12336:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 12336:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 197 previous similar messages
Lustre: 12336:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 12336:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 197 previous similar messages
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff968a987efc00/0xfd07213738d91006 lrc: 3/0,0 mode: PR/PR res: [0x240000403:0xff:0x0].0x0 bits 0x1b/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd07213738d90e70 expref: 126 pid: 11333 timeout: 407 lvb_type: 0
LustreError: lustre-MDT0001-mdc-ffff968a69e5e800: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff968a69e5e800: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 17543:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x240000403:0x17:0x0] mdc close failed: rc = -108
LustreError: 17543:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 18315:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108
Lustre: lustre-MDT0001-mdc-ffff968a69e5e800: Connection restored to (at 0@lo)
Lustre: 12108:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0001: [0x280000403:0x1:0x0]/2 is open, migrate only dentry
LustreError: 15125:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0000: '9' migration was interrupted, run 'lfs migrate -m 2 -c 3 -H crush 9' to finish migration: rc = -1
LustreError: 15125:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/9 failed: rc = -1
LustreError: 15125:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 1 previous similar message
Lustre: 11976:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0001: opcode 2: before 503 < left 1233, rollback = 2
Lustre: 11976:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 193 previous similar messages
Lustre: 11976:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/9, destroy: 0/0/0
Lustre: 11976:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 11976:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 15/1233/0
Lustre: 11976:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 11976:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0
Lustre: 11976:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 11976:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 11/239/4, delete: 0/0/0
Lustre: 11976:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 11976:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 5/5/0, ref_del: 0/0/0
Lustre: 11976:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 7 previous similar messages
LustreError: 8857:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0001: [0x240000403:0x8a:0x0] migrate mdt count mismatch 3 != 1
LustreError: 13094:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x240000403:0x8a:0x0]/2 failed: rc = -2
LustreError: 13094:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 4 previous similar messages
Lustre: 10587:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 10587:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 7 previous similar messages
LustreError: 73965:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000403:0xbb:0x0] migrate mdt count mismatch 3 != 1
Lustre: 13311:0:(mdt_reint.c:2484:mdt_reint_migrate()) lustre-MDT0000: [0x200000403:0x2:0x0]/8 is open, migrate only dentry
LustreError: 74335:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000404:0x1c6f:0x0]: rc = -5
LustreError: 74335:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 10050:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000403:0xb4:0x0] migrate mdt count mismatch 3 != 1
7[75325]: segfault at 8 ip 00007f40112b1875 sp 00007ffd65cbfcc0 error 4 in ld-2.28.so[7f4011290000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 13094:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0002: [0x280000403:0x1:0x0]/5 is open, migrate only dentry
Lustre: 13094:0:(mdd_dir.c:4838:mdd_migrate_object()) Skipped 8 previous similar messages
LustreError: 75325:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x200000405:0x165:0x0] mdc close failed: rc = -13
LustreError: 75325:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages
LustreError: 16615:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff968a716c2000: dir page locate: [0x200000406:0x31:0x0] at 0: rc -5
Lustre: dir [0x240000405:0x18d:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 4 previous similar messages
LustreError: 12464:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0002: '15' migration was interrupted, run 'lfs migrate -m 2 -c 1 -H crush 15' to finish migration: rc = -1
LustreError: 12464:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 1 previous similar message
LustreError: 12464:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x1:0x0]/15 failed: rc = -1
LustreError: 12464:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 2 previous similar messages
Lustre: 12061:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 504 < left 966, rollback = 2
Lustre: 12061:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 405 previous similar messages
LustreError: 12520:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x200000406:0x27b:0x0]/15 failed: rc = -116
LustreError: 12520:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 1 previous similar message
LustreError: 11976:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0000: '13' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 13' to finish migration: rc = -1
Lustre: 7516:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 618, rollback = 7
Lustre: 7516:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 17 previous similar messages
Lustre: 12520:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0001: [0x240000403:0x1:0x0]/14 is open, migrate only dentry
Lustre: 12520:0:(mdd_dir.c:4838:mdd_migrate_object()) Skipped 4 previous similar messages
Lustre: dir [0x280000404:0x1d8e:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 1 previous similar message
LustreError: 74095:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x240000405:0x18d:0x0] get parent: rc = -2
LustreError: 6722:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000405:0x18d:0x0]: rc = -2
LustreError: 74096:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x240000405:0x18d:0x0] mdc close failed: rc = -2
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff968a5ec0be00/0xfd07213738eb10d7 lrc: 3/0,0 mode: PR/PR res: [0x240000406:0x140:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd07213738eb10b4 expref: 134 pid: 74056 timeout: 537 lvb_type: 0
Lustre: 12727:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0001: [0x240000403:0x1:0x0]/5 is open, migrate only dentry
LustreError: lustre-MDT0001-mdc-ffff968a69e5e800: operation ldlm_enqueue to node 0@lo failed: rc = -107
LustreError: 17061:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1753119622 with bad export cookie 18232578137494018259
Lustre: lustre-MDT0001-mdc-ffff968a69e5e800: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0000: migrate [0x200000403:0x2:0x0]/11 failed: rc = -2
LustreError: Skipped 3 previous similar messages
LustreError: 82723:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0001-mdc-ffff968a69e5e800: [0x240000403:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 4 previous similar messages
LustreError: 119571:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x240000403:0x4:0x0] mdc close failed: rc = -108
LustreError: 82723:0:(mdc_request.c:1477:mdc_read_page()) Skipped 25 previous similar messages
LustreError: 119571:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message
LustreError: 119571:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0001-mdc-ffff968a69e5e800: namespace resource [0x240000403:0x1:0x0].0x0 (ffff968a5ef44c00) refcount nonzero (1) after lock cleanup; forcing cleanup.
LustreError: 119571:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 6 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff968a69e5e800: Connection restored to (at 0@lo)
Lustre: 11795:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff968a54eef700 x1838278670846976/t4294975492(0) o101->070aa2f4-d916-4640-a923-4e9d0cf8eeda@0@lo:707/0 lens 376/864 e 0 to 0 dl 1753119767 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0
LustreError: 6731:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0001: '19' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 19' to finish migration: rc = -1
Lustre: 15706:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0001: opcode 2: before 504 < left 831, rollback = 2
Lustre: 15706:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 319 previous similar messages
Lustre: 15706:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 1/4/4, destroy: 0/0/0
Lustre: 15706:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 749 previous similar messages
Lustre: 15706:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 10/831/0
Lustre: 15706:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 749 previous similar messages
Lustre: 15706:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/21/0, punch: 0/0/0, quota 7/129/7
Lustre: 15706:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 749 previous similar messages
Lustre: 15706:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 2/33/1, delete: 0/0/0
Lustre: 15706:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 749 previous similar messages
Lustre: 15706:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 15706:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 749 previous similar messages
LustreError: 10627:0:(lustre_lmv.h:500:lmv_is_sane()) unknown layout LMV: magic=0xcd40cd0 count=4 index=3 hash=crush:0x82000003 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
Lustre: 73946:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x24d:0x0] with magic=0xbd60bd0
Lustre: 18722:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x1f1f:0x0] with magic=0xbd60bd0
Lustre: 18722:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 121459:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000406:0x166:0x0]: rc = -5
LustreError: 121459:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 121459:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 121459:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 1 previous similar message
LustreError: 73946:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0000: [0x200000405:0x18f5:0x0] migrate mdt count mismatch 3 != 2
Lustre: 11821:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x271:0x0] with magic=0xbd60bd0
Lustre: 11821:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages
LustreError: 121746:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000405:0x2a8:0x0]: rc = -5
LustreError: 121746:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 2 previous similar messages
LustreError: 121746:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 121746:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 2 previous similar messages
LustreError: 6731:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0002: '4' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 4' to finish migration: rc = -1
LustreError: 6731:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 2 previous similar messages
Lustre: dir [0x240000407:0x3d:0x0] stripe 3 readdir failed: -2, directory is partially accessed!
Lustre: 34254:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 516 < left 618, rollback = 7
Lustre: 34254:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 5 previous similar messages
Lustre: 11838:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000404:0x1f70:0x0] with magic=0xbd60bd0
Lustre: 11838:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 5 previous similar messages
LustreError: 123771:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000406:0x16a:0x0]: rc = -5
LustreError: 123771:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 1 previous similar message
LustreError: 123771:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 123771:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 1 previous similar message
2[124813]: segfault at 8 ip 00007fda22e49875 sp 00007ffd459e40c0 error 4 in ld-2.28.so[7fda22e28000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 12464:0:(mdt_reint.c:2484:mdt_reint_migrate()) lustre-MDT0001: [0x240000403:0x1:0x0]/18 is open, migrate only dentry
LustreError: 127038:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000407:0x195:0x0]: rc = -5
LustreError: 127038:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 4 previous similar messages
LustreError: 127038:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 127038:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 4 previous similar messages
LustreError: 19996:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0000: '18' migration was interrupted, run 'lfs migrate -m 1 -c 3 -H crush 18' to finish migration: rc = -1
LustreError: 19996:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 1 previous similar message
Lustre: 9739:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000406:0x16a:0x0] with magic=0xbd60bd0
Lustre: 9739:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x1:0x0]/18 failed: rc = -1
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 14 previous similar messages
LustreError: 16502:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000405:0x1a34:0x0]: rc = -2
LustreError: 16502:0:(mdd_object.c:3901:mdd_close()) Skipped 1 previous similar message
LustreError: 129964:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff968a69e5e800: dir page locate: [0x280000404:0x3:0x0] at 0: rc -5
Lustre: dir [0x240000407:0x149:0x0] stripe 2 readdir failed: -5, directory is partially accessed!
LustreError: 129964:0:(mdc_request.c:1492:mdc_read_page()) Skipped 2 previous similar messages
Lustre: Skipped 1 previous similar message
Lustre: 73950:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0x731:0x0] with magic=0xbd60bd0
Lustre: 73950:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 7 previous similar messages
LustreError: 83804:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0002: '2' migration was interrupted, run 'lfs migrate -m 0 -c 1 -H crush 2' to finish migration: rc = -1
LustreError: 83804:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 1 previous similar message
LustreError: 132948:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000405:0x401:0x0]: rc = -5
LustreError: 132948:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 2 previous similar messages
LustreError: 132948:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 132948:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 2 previous similar messages
LustreError: 134219:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff968a69e5e800: dir page locate: [0x280000403:0x4ec:0x0] at 0: rc -5
Lustre: dir [0x280000403:0x699:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 3 previous similar messages
LustreError: 91588:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 7 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 151:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 1 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 151:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
Lustre: 9739:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0x6bf:0x0] with magic=0xbd60bd0
Lustre: 9739:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 61 previous similar messages
Lustre: 83804:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0000: [0x200000406:0x864:0x0]/17 is open, migrate only dentry
Lustre: 83804:0:(mdd_dir.c:4838:mdd_migrate_object()) Skipped 21 previous similar messages
LustreError: 112129:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 1 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 137018:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff968a69e5e800: dir page locate: [0x200000406:0x864:0x0] at 0: rc -5
LustreError: 137018:0:(mdc_request.c:1492:mdc_read_page()) Skipped 6 previous similar messages
Lustre: 15576:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0001: opcode 2: before 506 < left 1606, rollback = 2
Lustre: 15576:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1097 previous similar messages
LustreError: 10481:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000407:0x46d:0x0]: rc = -2
LustreError: 138074:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x240000407:0x46d:0x0] mdc close failed: rc = -2
LustreError: 138074:0:(file.c:248:ll_close_inode_openhandle()) Skipped 10 previous similar messages
LustreError: 140791:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x240000405:0x3c9:0x0] : rc = -5
LustreError: 140791:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000405:0x3c9:0x0] error -5.
LustreError: 13094:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0002: '0' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 0' to finish migration: rc = -1
LustreError: 13094:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 6 previous similar messages
Lustre: dir [0x240000407:0x648:0x0] stripe 1 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 12 previous similar messages
LustreError: 141084:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000407:0x5cd:0x0]: rc = -5
LustreError: 141084:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 27 previous similar messages
LustreError: 141084:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 141084:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 27 previous similar messages
LustreError: 316:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 18 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 6722:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0002: failed to get lu_attr of [0x280000403:0x995:0x0]: rc = -2
LustreError: 142635:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x240000407:0x5cd:0x0] : rc = -5
LustreError: 142635:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000407:0x5cd:0x0] error -5.
LustreError: 143263:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x240000405:0x3c9:0x0] : rc = -5
LustreError: 144623:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x240000407:0x5cd:0x0] : rc = -5
LustreError: 144623:0:(lov_object.c:1350:lov_layout_change()) Skipped 1 previous similar message
LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 14 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
Lustre: 13094:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0002-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x280000403:0xa85:0x0] with magic=0xbd60bd0
LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message
Lustre: 13094:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 37 previous similar messages
LustreError: 144818:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x280000403:0xaa2:0x0] error -5.
Lustre: 7515:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7
Lustre: 7515:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 11 previous similar messages
LustreError: 146704:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x280000403:0xaa2:0x0] : rc = -5
LustreError: 146704:0:(lov_object.c:1350:lov_layout_change()) Skipped 6 previous similar messages
LustreError: 146619:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff968a69e5e800: dir page locate: [0x200000400:0x27:0x0] at 0: rc -5
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x200000403:0x1:0x0]/13 failed: rc = -1
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 14 previous similar messages
LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 5 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 156:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 2 previous similar messages
LustreError: 150016:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x240000405:0x3c9:0x0] : rc = -5
LustreError: 150016:0:(lov_object.c:1350:lov_layout_change()) Skipped 4 previous similar messages
LustreError: 150016:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000405:0x3c9:0x0] error -5.
10[151329]: segfault at 7ffc8cb6e084 ip 00007ffc8cb6e084 sp 00007ffc8cb6ca40 error 15
Code: 6f 6d 65 2f 67 72 65 65 6e 2f 67 69 74 2f 6c 75 73 74 72 65 2d 72 65 6c 65 61 73 65 2f 6c 75 73 74 72 65 2f 74 65 73 74 73 00 <6d> 64 73 31 5f 4d 4f 55 4e 54 3d 2f 6d 6e 74 2f 6c 75 73 74 72 65
LustreError: 112129:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 15 [0x0:0x0:0x0] inode@0000000000000000: rc = -5
LustreError: 112129:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 2 previous similar messages
LustreError: 154677:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a69e5e800: cannot apply new layout on [0x240000407:0x86e:0x0] : rc = -5
LustreError: 154677:0:(lov_object.c:1350:lov_layout_change()) Skipped 18 previous similar messages
LustreError: 154677:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000407:0x86e:0x0] error -5.
LustreError: 154677:0:(vvp_io.c:1909:vvp_io_init()) Skipped 2 previous similar messages
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0001_UUID lock: ffff968aa2424e00/0xfd07213738f40f67 lrc: 3/0,0 mode: CR/CR res: [0x240000407:0x17d:0x0].0x0 bits 0xa/0x0 rrc: 7 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd07213738f40aeb expref: 257 pid: 11838 timeout: 666 lvb_type: 0
LustreError: lustre-MDT0001-mdc-ffff968a69e5e800: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0001-mdc-ffff968a69e5e800: Connection to lustre-MDT0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0001-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0001; in progress operations using this service will fail.
LustreError: 11077:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000b63ce910 ns: mdt-lustre-MDT0001_UUID lock: ffff968aa843d000/0xfd07213738f448cc lrc: 3/0,0 mode: PR/PR res: [0x240000407:0x17d:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xfd07213738f44894 expref: 8 pid: 11077 timeout: 0 lvb_type: 0
LustreError: 156411:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0001-mdc-ffff968a69e5e800: [0x240000402:0x23:0x0] lock enqueue fails: rc = -108
Lustre: dir [0x200000406:0x189c:0x0] stripe 1 readdir failed: -108, directory is partially accessed!
Lustre: Skipped 5 previous similar messages
LustreError: 11077:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 7 previous similar messages
LustreError: 156411:0:(mdc_request.c:1477:mdc_read_page()) Skipped 10 previous similar messages
Lustre: lustre-MDT0001-mdc-ffff968a69e5e800: Connection restored to (at 0@lo)
LustreError: 120380:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x280000404:0x2049:0x0] get parent: rc = -116
LustreError: 120380:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) Skipped 1 previous similar message
Lustre: 13532:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/11, destroy: 0/0/0
Lustre: 13532:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 2719 previous similar messages
Lustre: 13532:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2/2/0, xattr_set: 16/1322/0
Lustre: 13532:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 2719 previous similar messages
Lustre: 13532:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 4/54/0
Lustre: 13532:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 2719 previous similar messages
Lustre: 13532:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 12/275/4, delete: 0/0/0
Lustre: 13532:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 2718 previous similar messages
Lustre: 13532:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 6/6/0, ref_del: 0/0/0
Lustre: 13532:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 2719 previous similar messages
LustreError: 9946:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000405:0x18f5:0x0]: rc = -2
LustreError: 9946:0:(mdd_object.c:3901:mdd_close()) Skipped 2 previous similar messages
17[156992]: segfault at 8 ip 00007f9acc7bc875 sp 00007ffced51c6e0 error 4 in ld-2.28.so[7f9acc79b000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 6719:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000061dec62 ns: mdt-lustre-MDT0000_UUID lock: ffff968a987ec400/0xfd072137390c70a4 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x23:0x0].0x0 bits 0x12/0x0 rrc: 11 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xfd072137390c7081 expref: 468 pid: 6719 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 6724:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff968a54e30700 x1838278710088064/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295
LustreError: 158274:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0000-mdc-ffff968a69e5e800: dir page locate: [0x200000402:0x23:0x0] at 0: rc -5
LustreError: 158274:0:(mdc_request.c:1492:mdc_read_page()) Skipped 1 previous similar message
LustreError: 158211:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000403:0xf9a:0x0] error: rc = -5
LustreError: 158211:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 8 previous similar messages
LustreError: 158618:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 158618:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 130071:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x200000405:0xcd:0x0] get parent: rc = -108
LustreError: 157809:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff968a69e5e800: [0x200000402:0x24:0x0] lock enqueue fails: rc = -108
LustreError: 158705:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff968a69e5e800: namespace resource [0x200000007:0x1:0x0].0x0 (ffff968aa10f7700) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff968a69e5e800: Connection restored to (at 0@lo)
LustreError: 160067:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000406:0x69c:0x0] error -5.
LustreError: 160067:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message
LustreError: 10627:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0001: '16' migration was interrupted, run 'lfs migrate -m 0 -c 2 -H crush 16' to finish migration: rc = -1
LustreError: 10627:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 12 previous similar messages
LustreError: 162395:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000405:0x3c9:0x0]: rc = -5
LustreError: 162395:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 73 previous similar messages
LustreError: 162395:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 162395:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 73 previous similar messages
6[163966]: segfault at 8 ip 00007f9dad10e875 sp 00007ffe14faa7f0 error 4 in ld-2.28.so[7f9dad0ed000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
4[164430]: segfault at 8 ip 00007fb62658e875 sp 00007fffea8bf570 error 4 in ld-2.28.so[7fb62656d000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
18[167425]: segfault at 0 ip 00005562b61d8b47 sp 00007ffe0c1acb50 error 6 in 18[5562b61d4000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
LustreError: 143879:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0000: failed to get lu_attr of [0x200000406:0x1787:0x0]: rc = -2
LustreError: 168018:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a716c2000: cannot apply new layout on [0x200000406:0x69c:0x0] : rc = -5
LustreError: 168018:0:(lov_object.c:1350:lov_layout_change()) Skipped 7 previous similar messages
Lustre: 6733:0:(mdt_reint.c:2484:mdt_reint_migrate()) lustre-MDT0001: [0x240000403:0x1:0x0]/5 is open, migrate only dentry
Lustre: 15300:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000407:0xb0d:0x0] with magic=0xbd60bd0
Lustre: 15300:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 65 previous similar messages
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff968a95a2be00/0xfd07213739175257 lrc: 3/0,0 mode: PR/PR res: [0x200000407:0x1ee:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd07213739175193 expref: 225 pid: 9739 timeout: 810 lvb_type: 0
LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 1 previous similar message
Lustre: 12464:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0000: [0x200000407:0x1e9:0x0]/7 is open, migrate only dentry
Lustre: 12464:0:(mdd_dir.c:4838:mdd_migrate_object()) Skipped 40 previous similar messages
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0002: migrate [0x280000403:0x1267:0x0]/0 failed: rc = -116
LustreError: 13532:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 20 previous similar messages
LustreError: 7527:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1753119896 with bad export cookie 18232578137496781159
LustreError: lustre-MDT0000-mdc-ffff968a69e5e800: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff968a69e5e800: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: Skipped 1 previous similar message
LustreError: lustre-MDT0000-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: Skipped 3 previous similar messages
LustreError: 171439:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x200000405:0x1d86:0x0] mdc close failed: rc = -108
LustreError: 9580:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000407:0x1ee:0x0] error -108.
LustreError: 170934:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108
LustreError: 170934:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 171439:0:(file.c:248:ll_close_inode_openhandle()) Skipped 44 previous similar messages
LustreError: 171542:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 171542:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 416 previous similar messages
LustreError: 171649:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x200000407:0xce:0x0] get parent: rc = -108
LustreError: 171649:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) Skipped 17 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff968a69e5e800: Connection restored to (at 0@lo)
Lustre: 10587:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7
Lustre: 10587:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 45 previous similar messages
Lustre: 19996:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 2: before 506 < left 905, rollback = 2
Lustre: 19996:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 2403 previous similar messages
13[173548]: segfault at 8 ip 00007f3998116875 sp 00007fff194543c0 error 4 in ld-2.28.so[7f39980f5000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 174336:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000405:0x1d51:0x0]: rc = -5
LustreError: 174336:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 19 previous similar messages
LustreError: 174336:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5
LustreError: 174336:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 19 previous similar messages
LustreError: 12727:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0000: '4' migration was interrupted, run 'lfs migrate -m 2 -c 3 -H crush 4' to finish migration: rc = -1
LustreError: 12727:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 1 previous similar message
Lustre: dir [0x280000403:0xf9a:0x0] stripe 2 readdir failed: -2, directory is partially accessed!
Lustre: Skipped 23 previous similar messages
LustreError: 176811:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff968a716c2000: cannot apply new layout on [0x200000406:0x69c:0x0] : rc = -5
LustreError: 169192:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff968a716c2000: dir page locate: [0x280000404:0x21a6:0x0] at 0: rc -5
LustreError: 169192:0:(mdc_request.c:1492:mdc_read_page()) Skipped 4 previous similar messages
7[178485]: segfault at 8 ip 00007f45b85ad875 sp 00007fffa0a330e0 error 4 in ld-2.28.so[7f45b858c000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 9946:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000408:0x16d:0x0]: rc = -2
LustreError: 9946:0:(mdd_object.c:3901:mdd_close()) Skipped 3 previous similar messages
traps: 19[183747] trap invalid opcode ip:55e0a5c74f41 sp:8c1eb130 error:0 in 19[55e0a5c6f000+7000]
Lustre: 10054:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000405:0x1cd1:0x0] with magic=0xbd60bd0
Lustre: 10054:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 25 previous similar messages
17[188237]: segfault at 8 ip 00007fee7e5c9875 sp 00007ffd4753c910 error 4 in ld-2.28.so[7fee7e5a8000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 187180:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff968a716c2000: dir page locate: [0x280000403:0x126d:0x0] at 0: rc -5
LustreError: 187180:0:(mdc_request.c:1492:mdc_read_page()) Skipped 3 previous similar messages
LustreError: 11333:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0001: [0x240000407:0xb7e:0x0] migrate mdt count mismatch 1 != 2
17[192470]: segfault at 8 ip 00007f764cba7875 sp 00007ffe0b6cfb10 error 4 in ld-2.28.so[7f764cb86000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
17[193167]: segfault at 8 ip 00007f62279a5875 sp 00007ffe4d385f70 error 4 in ld-2.28.so[7f6227984000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
17[192859]: segfault at 8 ip 00007f85c2b72875 sp 00007fffd744c140 error 4 in ld-2.28.so[7f85c2b51000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
11[193588]: segfault at 8 ip 00007f54f66a9875 sp 00007ffe4fb4e230 error 4 in ld-2.28.so[7f54f6688000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
7[195465]: segfault at 8 ip 00007f0949b55875 sp 00007ffde1510c10 error 4 in ld-2.28.so[7f0949b34000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 191431:0:(llite_lib.c:1888:ll_update_lsm_md()) lustre: [0x240000408:0x84b:0x0] dir layout mismatch:
LustreError: 191431:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=10 count=3 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
LustreError: 191431:0:(lustre_lmv.h:167:lmv_stripe_object_dump()) stripe[0] [0x240000400:0x4e:0x0]
LustreError: 191431:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=1 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
LustreError: 191632:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=13 count=3 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
LustreError: 191632:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=1 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
LustreError: 191628:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=16 count=3 index=1 hash=crush:0x2000003 max_inherit=0 max_inherit_rr=0 version=2 migrate_offset=0 migrate_hash=invalid:0 pool=
LustreError: 191628:0:(lustre_lmv.h:160:lmv_stripe_object_dump()) dump LMV: magic=0xcd20cd0 refs=1 count=4 index=1 hash=crush:0x82000003 max_inherit=0 max_inherit_rr=0 version=1 migrate_offset=3 migrate_hash=fnv_1a_64:2 pool=
LustreError: 195476:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x240000405:0xb05:0x0] error -5.
LustreError: 11304:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0001: [0x240000408:0x8bf:0x0] migrate mdt count mismatch 2 != 3 LustreError: 11304:0:(mdt_xattr.c:406:mdt_dir_layout_update()) Skipped 1 previous similar message 9[202133]: segfault at 8 ip 00007fd8ba3a0875 sp 00007ffe8f185650 error 4 in ld-2.28.so[7fd8ba37f000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 195566:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x280000403:0x171d:0x0] get parent: rc = -116 LustreError: 195566:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) Skipped 2 previous similar messages 9[203697]: segfault at 8 ip 00007f9cb1375875 sp 00007fffa28638d0 error 4 in ld-2.28.so[7f9cb1354000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 11872:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 3/12/4, destroy: 1/4/0 Lustre: 11872:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 3066 previous similar messages Lustre: 11872:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 4/4/0, xattr_set: 13/922/0 Lustre: 11872:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 3066 previous similar messages Lustre: 11872:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 5/55/0, punch: 0/0/0, quota 7/129/0 Lustre: 11872:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 3066 previous similar messages Lustre: 11872:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 15/263/3, delete: 3/6/0 Lustre: 11872:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 3066 previous similar messages Lustre: 11872:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 8/8/0, ref_del: 3/3/0 Lustre: 
11872:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 3066 previous similar messages LustreError: 210571:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0002-mdc-ffff968a69e5e800: dir page locate: [0x280000400:0x30:0x0] at 0: rc -5 5[217288]: segfault at 8 ip 00007fd518f21875 sp 00007ffcd2f00bb0 error 4 LustreError: 210571:0:(mdc_request.c:1492:mdc_read_page()) Skipped 6 previous similar messages in ld-2.28.so[7fd518f00000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[217683]: segfault at 8 ip 00007f763715c875 sp 00007ffc14b57970 error 4 in ld-2.28.so[7f763713b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 5[222499]: segfault at 8 ip 00007f87bc6ec875 sp 00007ffce19ae660 error 4 in ld-2.28.so[7f87bc6cb000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 6709:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff968ad2410400/0xfd0721373947f677 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x2877:0x0].0x0 bits 0x1b/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xfd0721373947f607 expref: 465 pid: 9863 timeout: 1064 lvb_type: 0 LustreError: 11838:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 000000004c8c4129 ns: mdt-lustre-MDT0000_UUID lock: ffff968abda11600/0xfd07213739485c4b lrc: 3/0,0 mode: PR/PR res: [0x200000405:0x2877:0x0].0x0 bits 0x1b/0x0 rrc: 3 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0xfd07213739485c1a expref: 
592 pid: 11838 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff968a716c2000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff968a716c2000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff968a716c2000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: lustre-MDT0000-mdc-ffff968a69e5e800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 224512:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000405:0x28aa:0x0] error: rc = -5 LustreError: 224147:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5 LustreError: 224512:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 369 previous similar messages LustreError: 224147:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 10 previous similar messages LustreError: 223739:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff968a716c2000: [0x200000403:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 223739:0:(mdc_request.c:1477:mdc_read_page()) Skipped 1 previous similar message LustreError: 223739:0:(statahead.c:1807:is_first_dirent()) lustre: reading dir [0x200000403:0x1:0x0] at 0 stat_pid = 223739 : rc = -108 LustreError: 223739:0:(statahead.c:1807:is_first_dirent()) Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff968a716c2000: Connection restored to (at 0@lo) LustreError: 10007:0:(mdt_xattr.c:406:mdt_dir_layout_update()) lustre-MDT0001: [0x240000405:0x10d3:0x0] migrate mdt count mismatch 2 != 3 LustreError: 10007:0:(mdt_xattr.c:406:mdt_dir_layout_update()) Skipped 2 previous similar messages Lustre: 15576:0:(mdd_dir.c:4838:mdd_migrate_object()) lustre-MDD0001: [0x200000403:0x2:0x0]/16 is open, migrate only dentry Lustre: 15576:0:(mdd_dir.c:4838:mdd_migrate_object()) Skipped 58 previous 
similar messages 5[273156]: segfault at 8 ip 00007f3a44786875 sp 00007ffecbd1dbd0 error 4 in ld-2.28.so[7f3a44765000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 83873:0:(mdt_reint.c:2564:mdt_reint_migrate()) lustre-MDT0001: migrate [0x200000408:0xabd:0x0]/16 failed: rc = -1 LustreError: 83873:0:(mdt_reint.c:2564:mdt_reint_migrate()) Skipped 66 previous similar messages Lustre: 13099:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0002: opcode 2: before 516 < left 906, rollback = 2 Lustre: 13099:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 3139 previous similar messages LustreError: 187152:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 4 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 187152:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message Lustre: 197341:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 515 < left 618, rollback = 7 Lustre: 197341:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 117 previous similar messages LustreError: 275334:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000403:0x1e58:0x0]: rc = -5 LustreError: 275334:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 83 previous similar messages LustreError: 275334:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 275334:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 83 previous similar messages LustreError: 15091:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) lustre-MDD0002: '17' migration was interrupted, run 'lfs migrate -m 1 -c 2 -H crush 17' to finish migration: rc = -1 LustreError: 15091:0:(mdd_dir.c:4759:mdd_migrate_cmd_check()) Skipped 22 previous similar messages Lustre: 12520:0:(mdt_reint.c:2484:mdt_reint_migrate()) 
lustre-MDT0002: [0x280000403:0x1:0x0]/15 is open, migrate only dentry LustreError: 280940:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff968a69e5e800: inode [0x200000408:0x928:0x0] mdc close failed: rc = -13 LustreError: 280940:0:(file.c:248:ll_close_inode_openhandle()) Skipped 63 previous similar messages LustreError: 278634:0:(mdc_request.c:1492:mdc_read_page()) lustre-MDT0001-mdc-ffff968a69e5e800: dir page locate: [0x240000400:0x67:0x0] at 0: rc -5 Lustre: dir [0x280000403:0x1f47:0x0] stripe 3 readdir failed: -5, directory is partially accessed! LustreError: 278634:0:(mdc_request.c:1492:mdc_read_page()) Skipped 2 previous similar messages Lustre: Skipped 37 previous similar messages Lustre: 8857:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0001-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x240000408:0xd2a:0x0] with magic=0xbd60bd0 Lustre: 8857:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 99 previous similar messages LustreError: 10481:0:(mdd_object.c:3901:mdd_close()) lustre-MDD0001: failed to get lu_attr of [0x240000405:0x1838:0x0]: rc = -2 LustreError: 10481:0:(mdd_object.c:3901:mdd_close()) Skipped 5 previous similar messages | Link to test |
racer test 2: racer rename: centos-95.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 2a4b6c067 P4D 2a4b6c067 PUD 2d3b40067 PMD 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 6 PID: 678844 Comm: ll_sa_678679 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffa84b52dbfe90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000009 RDX: ffff8cb23e3b8160 RSI: ffff8cb2160ec648 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff8cb000005180 R09: 0000000000000000 R10: 0000000000000008 R11: 000000000000001f R12: ffff8cb2160ec600 R13: ffff8cb1214b5ab8 R14: ffff8cb2160ec2c8 R15: ffff8cb2160ec648 FS: 0000000000000000(0000) GS:ffff8cb23e380000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000025f221000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? update_load_avg+0x9f/0xa40 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 47472:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb095ff5e80 x1837782389073536/t4295117086(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:508/0 lens 376/24464 e 0 to 0 dl 1752648448 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 11535:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb077be3800 x1837782389447680/t4295186212(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:509/0 lens 384/43576 e 0 to 0 dl 1752648449 ref 1 fl Interpret:H/602/0 rc 0/0 job:'lfs.0' uid:0 gid:0 projid:0 Lustre: 47472:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb0e2fede80 x1837782390176256/t4295186420(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:513/0 lens 376/43576 e 0 to 0 dl 1752648453 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 ODEBUG: object 00000000cc0f9804 is on stack 000000003ae4a0aa, but NOT annotated. 
WARNING: CPU: 5 PID: 12621 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 5 PID: 12621 Comm: mdt00_024 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: fe 83 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa84b531df510 EFLAGS: 00010002 RAX: 0000000000000050 RBX: ffffa84b531df618 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff8cb23e35e5a8 RDI: ffff8cb23e35e5a8 RBP: ffffffff84705ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa84b531df308 R12: ffffffff85ef1388 R13: 000000000000a020 R14: ffffffff85ef1380 R15: ffff8cb0f89e8d98 FS: 0000000000000000(0000) GS:ffff8cb23e340000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f18b525fe20 CR3: 000000015f934000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? kmalloc_order+0xfb/0x120 ? 
slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] ? lod_qos_parse_config+0x811/0x1090 [lod] lod_prepare_create+0x204/0x460 [lod] lod_declare_striped_create+0x270/0xf80 [lod] ? osd_trans_create+0x184/0x620 [osd_ldiskfs] ? do_raw_spin_unlock+0x75/0x190 ? _raw_spin_unlock+0x12/0x30 lod_declare_xattr_set+0x290/0x1320 [lod] dt_declare_xattr_set.constprop.38+0x7b/0x230 [mdd] ? mdd_trans_create+0x56/0x110 [mdd] mdd_create_data+0x5b2/0x930 [mdd] mdt_mfd_open+0x1457/0x16a0 [mdt] mdt_finish_open+0x845/0xe10 [mdt] mdt_open_by_fid_lock+0x9d8/0x1170 [mdt] mdt_reint_open+0x943/0x3c10 [mdt] ? sptlrpc_svc_alloc_rs+0x70/0x460 [ptlrpc] ? lustre_msg_add_version+0x29/0xd0 [ptlrpc] ? lustre_pack_reply_v2+0x282/0x380 [ptlrpc] ? ucred_set_audit_enabled.isra.12+0x28/0xa0 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace 546ab86463d48720 ]--- ODEBUG: object 000000005ea4efd3 is on stack 00000000a3d6b0a5, but NOT annotated. 
WARNING: CPU: 12 PID: 437229 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 12 PID: 437229 Comm: mdt00_060 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: fe 83 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa84b5912f4a0 EFLAGS: 00010002 RAX: 0000000000000050 RBX: ffffa84b5912f5a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff8cb23e51e5a8 RDI: ffff8cb23e51e5a8 RBP: ffffffff84705ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa84b5912f298 R12: ffffffff85eed788 R13: 0000000000006420 R14: ffffffff85eed780 R15: ffff8cb010f15e38 FS: 0000000000000000(0000) GS:ffff8cb23e500000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fa052e3f3e0 CR3: 0000000330c16000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? string_nocheck+0x77/0xa0 ? 
string+0x58/0x70 ? slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace 546ab86463d48721 ]--- ODEBUG: object 000000008199645b is on stack 00000000af7bd0e1, but NOT annotated. 
WARNING: CPU: 4 PID: 9775 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 4 PID: 9775 Comm: mdt00_006 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: fe 83 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa84b523df4a0 EFLAGS: 00010006 RAX: 0000000000000050 RBX: ffffa84b523df5a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff8cb23e31e5a8 RDI: ffff8cb23e31e5a8 RBP: ffffffff84705ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa84b523df298 R12: ffffffff85ef58c8 R13: 000000000000e560 R14: ffffffff85ef58c0 R15: ffff8cb0e2062dc0 FS: 0000000000000000(0000) GS:ffff8cb23e300000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f9202d1c750 CR3: 00000001948a6000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? slab_post_alloc_hook+0x66/0x380 ? 
lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] ? osd_xattr_get+0x274/0x940 [osd_ldiskfs] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace 546ab86463d48722 ]--- ODEBUG: object 000000003472c0b8 is on stack 000000005ce3996f, but NOT annotated. 
WARNING: CPU: 12 PID: 91486 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 12 PID: 91486 Comm: mdt00_047 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: fe 83 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa84b50bdf4a0 EFLAGS: 00010002 RAX: 0000000000000050 RBX: ffffa84b50bdf5a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff8cb23e51e5a8 RDI: ffff8cb23e51e5a8 RBP: ffffffff84705ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa84b50bdf298 R12: ffffffff85f0f408 R13: 00000000000280a0 R14: ffffffff85f0f400 R15: ffff8cb0b782d0c8 FS: 0000000000000000(0000) GS:ffff8cb23e500000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fa7891d0860 CR3: 0000000330c16000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? slab_post_alloc_hook+0x66/0x380 ? 
lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] ? osd_xattr_get+0x274/0x940 [osd_ldiskfs] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] ? lod_alloc_comp_entries+0x2a7/0x650 [lod] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace 546ab86463d48723 ]--- ODEBUG: object 00000000d8561bb4 is on stack 00000000d18f4ed7, but NOT annotated. 
WARNING: CPU: 4 PID: 6696 at lib/debugobjects.c:368 __debug_object_init.cold.5+0x35/0x15f Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon pcspkr i2c_piix4 rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CPU: 4 PID: 6696 Comm: mdt00_002 Kdump: loaded Tainted: G W O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:__debug_object_init.cold.5+0x35/0x15f Code: fe 83 48 83 05 33 38 0c 03 01 89 05 69 40 0c 03 65 48 8b 04 25 00 dd 01 00 48 8b 50 18 e8 43 87 99 ff 48 83 05 2b 38 0c 03 01 <0f> 0b 48 83 05 29 38 0c 03 01 48 83 05 29 38 0c 03 01 e9 7f ee ff RSP: 0018:ffffa84b446cb4a0 EFLAGS: 00010006 RAX: 0000000000000050 RBX: ffffa84b446cb5a8 RCX: 0000000000000000 RDX: 0000000000000000 RSI: ffff8cb23e31e5a8 RDI: ffff8cb23e31e5a8 RBP: ffffffff84705ca0 R08: 0000000000000000 R09: c0000000ffff7fff R10: 0000000000000001 R11: ffffa84b446cb298 R12: ffffffff85f15c28 R13: 000000000002e8c0 R14: ffffffff85f15c20 R15: ffff8cb007cd0af0 FS: 0000000000000000(0000) GS:ffff8cb23e300000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007f2430d99840 CR3: 00000001687fc000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __warn+0xc8/0x150 ? __debug_object_init.cold.5+0x35/0x15f ? report_bug+0x113/0x140 ? do_error_trap+0xb6/0x130 ? do_invalid_op+0x46/0x60 ? __debug_object_init.cold.5+0x35/0x15f ? invalid_op+0x14/0x20 ? __debug_object_init.cold.5+0x35/0x15f ? lod_set_pool+0x270/0x270 [lod] debug_object_init+0x22/0x30 init_timer_key+0x28/0x120 lod_ost_alloc_qos+0x770/0x1c30 [lod] ? string_nocheck+0x77/0xa0 ? 
string+0x58/0x70 ? slab_post_alloc_hook+0x66/0x380 ? lod_qos_prep_create+0x390/0x1be0 [lod] ? __kmalloc+0x1b4/0x4a0 lod_qos_prep_create+0x1378/0x1be0 [lod] lod_prepare_create+0x204/0x460 [lod] ? osd_declare_create+0x4a2/0x7a0 [osd_ldiskfs] lod_declare_striped_create+0x270/0xf80 [lod] ? lod_sub_declare_create+0x111/0x320 [lod] lod_declare_create+0x3d4/0x9c0 [lod] mdd_declare_create_object_internal+0x107/0x4a0 [mdd] mdd_declare_create_object.isra.25+0x55/0xc40 [mdd] mdd_declare_create+0x6a/0x6c0 [mdd] mdd_create+0x5bd/0x1d00 [mdd] ? mdt_version_save+0xa8/0x210 [mdt] mdt_reint_open+0x337c/0x3c10 [mdt] ? old_init_ucred_common+0x1ae/0x840 [mdt] ? lustre_swab_generic_32s+0x20/0x20 [ptlrpc] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_intent_open+0x180/0x5b0 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_fixup_resent+0x2e0/0x2e0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ---[ end trace 546ab86463d48724 ]--- Lustre: 10623:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb1960b6900 x1837782393044864/t4295186571(0) o101->2f2ef574-0443-4c67-832a-a861c5c8c0a0@0@lo:516/0 lens 376/43576 e 0 to 0 dl 1752648456 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 12621:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb16cd9ad80 x1837782398252160/t4295117926(0) o101->2f2ef574-0443-4c67-832a-a861c5c8c0a0@0@lo:520/0 lens 376/46216 e 0 to 0 dl 1752648460 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 12621:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 1 previous similar message Lustre: 214246:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb16d580a80 x1837782412294400/t4295118911(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:541/0 lens 376/46216 e 0 to 0 dl 1752648481 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 214246:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 1 previous similar message Lustre: 11757:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb114efbf00 x1837782420933632/t4295115077(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:558/0 lens 376/48328 e 0 to 0 dl 1752648498 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 11757:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 5 previous similar messages LustreError: 362127:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x20000040c:0x7d5:0x0] inode@0000000000000000: rc = -5 Lustre: 9722:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb172b79c00 x1837782442163712/t4295116522(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:591/0 lens 376/48328 e 0 to 0 dl 1752648531 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 
9722:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 3 previous similar messages LustreError: 15826:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb0c6117380 x1837782475775872/t0(0) o104->lustre-OST0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: lustre-OST0000-osc-ffff8cb075464000: operation ost_setattr to node 0@lo failed: rc = -107 LustreError: Skipped 7 previous similar messages LustreError: lustre-OST0000-osc-ffff8cb075464000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. Lustre: 3303:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040b:0x2139:0x0]/ may get corrupted (rc -108) LustreError: 457844:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0000-osc-ffff8cb075464000: namespace resource [0x2c0000402:0x3eef:0x0].0x0 (ffff8cb0e885f700) refcount nonzero (1) after lock cleanup; forcing cleanup. 
Lustre: 9775:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb0d8ac2300 x1837782480766720/t4295192807(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:659/0 lens 376/48272 e 0 to 0 dl 1752648599 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 9775:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 3 previous similar messages LustreError: 449529:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb06fb62680 x1837782492284928/t0(0) o104->lustre-OST0003@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 449529:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 11 previous similar messages LustreError: 10933:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752648481 with bad export cookie 15260386110144704740 LustreError: lustre-OST0003-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
LustreError: 10933:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) Skipped 1 previous similar message LustreError: 442338:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb1939b1180 x1837782492799360/t0(0) o104->lustre-OST0003@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 442338:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 23 previous similar messages Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x340000402 to 0x340000403 Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000402 to 0x2c0000403 Lustre: 17892:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb13444db00 x1837782561673472/t4295130125(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:34/0 lens 376/47728 e 0 to 0 dl 1752648729 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 17892:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 19 previous similar messages Lustre: 37491:0:(out_handler.c:879:out_tx_end()) lustre-MDT0002-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:562: rc = -2 LustreError: 37491:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0002-osd: Oops, can not rollback index_delete yet: rc = -524 LustreError: 9701:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0xfa7:0x0] doesn't exist!: rc = -14 LustreError: 13756:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040b:0x2419:0x0] doesn't exist!: rc = -14 LustreError: 12810:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040b:0x2419:0x0] doesn't exist!: rc = -14 Lustre: 15779:0:(out_handler.c:879:out_tx_end()) lustre-MDT0002-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:562: rc = -2 LustreError: 15779:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0002-osd: 
Oops, can not rollback index_delete yet: rc = -524 LustreError: 40557:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0xfa7:0x0] doesn't exist!: rc = -14 LustreError: 40557:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 1 previous similar message LustreError: lustre-OST0000-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. LustreError: 443898:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb0b3561500 x1837782575239680/t0(0) o104->lustre-OST0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 443898:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 2 previous similar messages Lustre: 3297:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x3f22:0x0]// may get corrupted (rc -108) Lustre: 3297:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000409:0x97c:0x0]// may get corrupted (rc -108) Lustre: 3297:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040b:0x24d4:0x0]// may get corrupted (rc -108) Lustre: 3294:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x3f31:0x0]/ may get corrupted (rc -108) Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x300000402 to 0x300000403 LustreError: 475625:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0000-osc-ffff8cb0755c8000: namespace resource [0x2c0000402:0x7fce:0x0].0x0 (ffff8cb1b17acd00) refcount nonzero (1) after lock cleanup; forcing cleanup. 
LustreError: 475625:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 82 previous similar messages LustreError: 12745:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x101a:0x0] doesn't exist!: rc = -14 LustreError: 12700:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x101a:0x0] doesn't exist!: rc = -14 LustreError: 12700:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 4 previous similar messages Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x380000401 to 0x380000403 LustreError: 13706:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x101a:0x0] doesn't exist!: rc = -14 LustreError: 13706:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 5 previous similar messages Lustre: lustre-OST0003-osc-MDT0000: update sequence from 0x380000402 to 0x380000404 Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x300000401 to 0x300000404 LustreError: 214224:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x101a:0x0] doesn't exist!: rc = -14 LustreError: 214224:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 12 previous similar messages Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c0000401 to 0x2c0000404 LustreError: 442502:0:(ofd_dev.c:1776:ofd_create_hdl()) lustre-OST0001: unable to precreate: rc = -28 LustreError: 8322:0:(osp_precreate.c:654:osp_precreate_send()) lustre-OST0001-osc-MDT0002: can't precreate: rc = -28 Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x340000400 to 0x340000404 Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c0000400 to 0x2c0000405 ptlrpc_watchdog_fire: 12 callbacks suppressed Lustre: mdt00_057: service thread pid 349154 was inactive for 40.798 seconds. The thread might be hung, or it might only be slow and will resume later. 
Dumping the stack trace for debugging purposes: task:mdt00_017 state:I task:ll_ost_out00_00 state:I stack:0 pid:37487 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_pdo_lock+0x535/0x910 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] Lustre: Skipped 2 previous similar messages task:mdt00_057 state:I stack:0 pid:349154 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lu_name_is_valid_len+0x5e/0x80 [mdt] mdt_getattr_name_lock+0x278a/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] stack:0 pid:11699 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? kmalloc_order+0xfb/0x120 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? 
mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] mdt_batch+0x7ee/0x20a9 [mdt] ? lustre_msg_get_tag+0x1/0x110 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 mdt_object_pdo_lock+0x409/0x910 [mdt] mdt_getattr_name_lock+0x274f/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] mdt_reint_unlink+0x234/0x1a30 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Lustre: ll_ost_out00_00: service thread pid 7486 was inactive for 43.356 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 4 previous similar messages Lustre: lustre-OST0002-osc-MDT0001: update sequence from 0x340000401 to 0x340000405 Lustre: mdt_out00_004: service thread pid 12324 was inactive for 43.445 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: 91511:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 6/24/0, destroy: 1/4/0 Lustre: 91511:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5793 previous similar messages Lustre: 91511:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2845/2845/0, xattr_set: 4267/39936/0 Lustre: 91511:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5793 previous similar messages Lustre: 91511:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 28/157/0, punch: 0/0/0, quota 1/3/0 Lustre: 91511:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5793 previous similar messages Lustre: 91511:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 7/118/0, delete: 2/5/0 Lustre: 91511:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5793 previous similar messages Lustre: 91511:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 2/2/1 Lustre: 91511:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5793 previous similar messages LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0002_UUID lock: ffff8cb094ec7200/0xd3c7c850702e2d96 lrc: 3/0,0 mode: PR/PR res: [0x280000408:0x1251:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xd3c7c850702e2d03 expref: 657 pid: 15779 timeout: 3320 lvb_type: 0 LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 3 previous similar messages LustreError: 
11699:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 000000004cf82c85 ns: mdt-lustre-MDT0002_UUID lock: ffff8cb1e0bb4400/0xd3c7c850702e36c6 lrc: 3/0,0 mode: PR/PR res: [0x280000403:0x1:0x0].0x0 bits 0x12/0x0 rrc: 13 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0xd3c7c850702e3672 expref: 203 pid: 11699 timeout: 0 lvb_type: 0 LustreError: 11699:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 7 previous similar messages Lustre: mdt_out00_004: service thread pid 12324 completed after 72.204s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: ll_ost_out00_00: service thread pid 37487 completed after 102.438s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: ll_ost_out00_00: service thread pid 7486 completed after 92.618s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: lustre-MDT0002-mdc-ffff8cb0755c8000: Connection to lustre-MDT0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 3 previous similar messages LustreError: 493208:0:(lmv_obd.c:1468:lmv_statfs()) lustre-MDT0002-mdc-ffff8cb0755c8000: can't stat MDS #0: rc = -11 Lustre: mdt_out00_005: service thread pid 28106 completed after 72.362s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0002-mdc-ffff8cb0755c8000: This client was evicted by lustre-MDT0002; in progress operations using this service will fail. LustreError: 485444:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000403:0x1:0x0] error: rc = -5 Lustre: mdt00_017: service thread pid 11699 completed after 102.597s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 485444:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 341 previous similar messages LustreError: 485504:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8cb0755c8000: inode [0x280000409:0xb36:0x0] mdc close failed: rc = -108 LustreError: 485504:0:(file.c:248:ll_close_inode_openhandle()) Skipped 88 previous similar messages Lustre: mdt00_057: service thread pid 349154 completed after 102.583s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: 493279:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0002-mdc-ffff8cb0755c8000: [0x280000402:0x92:0x0] lock enqueue fails: rc = -108 Lustre: dir [0x20000040c:0x136c:0x0] stripe 2 readdir failed: -108, directory is partially accessed! Lustre: Skipped 64 previous similar messages Lustre: lustre-MDT0002-mdc-ffff8cb0755c8000: Connection restored to (at 0@lo) Lustre: Skipped 3 previous similar messages LustreError: 17892:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0xfa7:0x0] doesn't exist!: rc = -14 Lustre: lustre-OST0003-osc-MDT0002: update sequence from 0x380000400 to 0x380000405 Lustre: 12250:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0001: opcode 0: before 515 < left 2262, rollback = 0 Lustre: 12250:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 5114 previous similar messages LustreError: 442480:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb0c732f380 x1837782699761920/t0(0) o104->lustre-OST0002@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 442480:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 73 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb075464000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
Lustre: 3302:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x444b:0x0]/ may get corrupted (rc -5) Lustre: 3302:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x28000040a:0x3e:0x0]/ may get corrupted (rc -108) Lustre: 3300:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x13b3:0x0]// may get corrupted (rc -108) Lustre: 437198:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb0c9514d00 x1837782711477760/t4295135900(0) o101->2f2ef574-0443-4c67-832a-a861c5c8c0a0@0@lo:316/0 lens 376/48536 e 0 to 0 dl 1752649011 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 437198:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 27 previous similar messages Lustre: lustre-OST0001-osc-MDT0002: update sequence from 0x300000400 to 0x300000405 Lustre: 68125:0:(out_handler.c:879:out_tx_end()) lustre-MDT0002-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:532: rc = -17 LustreError: 68125:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0002-osd: Oops, can not rollback index_delete yet: rc = -524 Lustre: 28106:0:(out_handler.c:879:out_tx_end()) lustre-MDT0002-osd: error during execution of #2 from /home/green/git/lustre-release/lustre/ptlrpc/../../lustre/target/out_handler.c:562: rc = -2 LustreError: 28106:0:(out_lib.c:1168:out_tx_index_delete_undo()) lustre-MDT0002-osd: Oops, can not rollback index_delete yet: rc = -524 LustreError: 12071:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x15f3:0x0] doesn't exist!: rc = -14 LustreError: 12071:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 47 previous similar messages Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x340000403 to 0x340000406 LustreError: 
451381:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752649000 with bad export cookie 15260386110175962799 LustreError: lustre-OST0002-osc-ffff8cb075464000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. LustreError: 443913:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb116348700 x1837782782252544/t0(0) o104->lustre-OST0002@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 Lustre: 3294:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x16c6:0x0]// may get corrupted (rc -108) LustreError: 443913:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 60 previous similar messages LustreError: 442508:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb13c85f380 x1837782826517376/t0(0) o104->lustre-OST0003@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 442508:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 9 previous similar messages LustreError: lustre-OST0003-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
LustreError: lustre-OST0003-osc-ffff8cb0755c8000: operation ost_punch to node 0@lo failed: rc = -107 LustreError: Skipped 22 previous similar messages Lustre: 3306:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040c:0x189c:0x0]/ may get corrupted (rc -108) Lustre: 3301:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040b:0x2e68:0x0]// may get corrupted (rc -108) LustreError: 520485:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0003-osc-ffff8cb0755c8000: namespace resource [0x380000405:0x1c0b:0x0].0x0 (ffff8cb1c37d5200) refcount nonzero (1) after lock cleanup; forcing cleanup. LustreError: 520485:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 413 previous similar messages Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000403 to 0x2c0000406 Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x340000404 to 0x340000407 LustreError: 442508:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb179d8b800 x1837782846314368/t0(0) o106->lustre-OST0002@0@lo:15/16 lens 328/280 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 442508:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 47 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb075464000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. LustreError: lustre-OST0003-osc-ffff8cb075464000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
Lustre: 3300:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040c:0x1960:0x0]// may get corrupted (rc -5) Lustre: 3302:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x1880:0x0]// may get corrupted (rc -108) Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x380000403 to 0x380000406 Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c0000405 to 0x2c0000407 LustreError: 13706:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0xfa7:0x0] doesn't exist!: rc = -14 LustreError: 13706:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 150 previous similar messages LustreError: 10045:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000b0248b8e ns: mdt-lustre-MDT0000_UUID lock: ffff8cb174002a00/0xd3c7c85070c28df7 lrc: 3/0,0 mode: PR/PR res: [0x20000040c:0x1955:0x0].0x0 bits 0x1b/0x0 rrc: 2 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xd3c7c85070aee758 expref: 516 pid: 10045 timeout: 0 lvb_type: 0 LustreError: 10045:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 1 previous similar message LustreError: lustre-MDT0000-mdc-ffff8cb075464000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 528513:0:(llite_lib.c:3786:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 528513:0:(llite_lib.c:3786:ll_prep_inode()) Skipped 45 previous similar messages Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x300000404 to 0x300000406 Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c0000404 to 0x2c0000408 LustreError: 17935:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb0b8e8f700 x1837782888122240/t0(0) o105->lustre-OST0000@0@lo:15/16 lens 392/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 17935:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 85 previous similar messages LustreError: lustre-OST0000-osc-ffff8cb075464000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. LustreError: 448514:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752649285 with bad export cookie 15260386110144704747 LustreError: 448514:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) Skipped 2 previous similar messages LustreError: lustre-OST0001-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. Lustre: 3306:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x19f1:0x0]// may get corrupted (rc -108) Lustre: 3307:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040b:0x3074:0x0]// may get corrupted (rc -108) Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x300000403 to 0x300000407 LustreError: 530636:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0001-osc-ffff8cb0755c8000: namespace resource [0x300000404:0x72fc:0x0].0x0 (ffff8cb0dfd21400) refcount nonzero (1) after lock cleanup; forcing cleanup. 
LustreError: 530636:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 590 previous similar messages Lustre: lustre-OST0002-osc-MDT0001: update sequence from 0x340000405 to 0x340000408 Lustre: 11535:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb13a652300 x1837782909363328/t4295151124(0) o101->2f2ef574-0443-4c67-832a-a861c5c8c0a0@0@lo:47/0 lens 376/48536 e 0 to 0 dl 1752649497 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 11535:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 47 previous similar messages Lustre: lustre-OST0003-osc-MDT0000: update sequence from 0x380000404 to 0x380000407 Lustre: 13830:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 6/24/0, destroy: 1/4/0 Lustre: 13830:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1585 previous similar messages Lustre: 13830:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1213/1213/0, xattr_set: 1820/17195/0 Lustre: 13830:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1585 previous similar messages Lustre: 13830:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 28/157/0, punch: 0/0/0, quota 1/3/0 Lustre: 13830:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1585 previous similar messages Lustre: 13830:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 8/134/0, delete: 3/6/0 Lustre: 13830:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1585 previous similar messages Lustre: 13830:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 1/1/0 Lustre: 13830:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1585 previous similar messages Lustre: 6697:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 0: before 510 < left 2520, rollback = 0 Lustre: 6697:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1559 previous similar messages LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: 
filter-lustre-OST0002_UUID lock: ffff8cb15791c800/0xd3c7c85070d00324 lrc: 3/0,0 mode: PW/PW res: [0x340000407:0x1edb:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000400010020 nid: 0@lo remote: 0xd3c7c85070d0031d expref: 23686 pid: 442341 timeout: 3993 lvb_type: 0 LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 8 previous similar messages LustreError: 442718:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb1291fe580 x1837782953385600/t0(0) o104->lustre-OST0002@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 442718:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 367 previous similar messages Lustre: lustre-OST0002-osc-ffff8cb0755c8000: Connection to lustre-OST0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 8 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. Lustre: lustre-OST0002-osc-ffff8cb0755c8000: Connection restored to (at 0@lo) Lustre: Skipped 8 previous similar messages LustreError: lustre-OST0003-osc-ffff8cb075464000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. Lustre: 3299:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x1c97:0x0]// may get corrupted (rc -108) LustreError: 11189:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752649641 with bad export cookie 15260386110181846061 LustreError: 11189:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) Skipped 2 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb075464000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
Lustre: 3292:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x4fec:0x0]/ may get corrupted (rc -108) Lustre: 3293:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x56b:0x0]// may get corrupted (rc -108) Lustre: 3294:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x1e0a:0x0]// may get corrupted (rc -108) LustreError: 557675:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0002-osc-ffff8cb075464000: namespace resource [0x340000408:0x32e8:0x0].0x0 (ffff8cb1a31d1500) refcount nonzero (1) after lock cleanup; forcing cleanup. LustreError: 557675:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 431 previous similar messages Lustre: lustre-OST0003-osc-MDT0002: update sequence from 0x380000405 to 0x380000408 LustreError: 170244:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x15f3:0x0] doesn't exist!: rc = -14 LustreError: 170244:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 195 previous similar messages Lustre: lustre-OST0001-osc-MDT0002: update sequence from 0x300000405 to 0x300000408 Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x340000406 to 0x340000409 LustreError: 445889:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb0d841e200 x1837783089573888/t0(0) o104->lustre-OST0002@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: 445889:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 134 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
Lustre: 3296:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040b:0x3888:0x0]/ may get corrupted (rc -108) Lustre: 3298:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000407:0x2146:0x0]/ may get corrupted (rc -108) Lustre: 3298:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x529b:0x0]// may get corrupted (rc -108) Lustre: 3301:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x8ad:0x0]/ may get corrupted (rc -108) LustreError: lustre-OST0003-osc-ffff8cb075464000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: Skipped 130 previous similar messages LustreError: lustre-OST0003-osc-ffff8cb075464000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000406 to 0x2c0000409 Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x380000406 to 0x380000409 Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x340000407 to 0x34000040a Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c0000408 to 0x2c000040a Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c0000407 to 0x2c000040b Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x300000406 to 0x300000409 Lustre: lustre-OST0002-osc-MDT0001: update sequence from 0x340000408 to 0x34000040b Lustre: 12621:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb0fe0a5400 x1837783167852800/t4295174403(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:569/0 lens 376/48536 e 0 to 0 dl 1752650019 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 12621:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 52 previous similar messages ptlrpc_watchdog_fire: 3 callbacks suppressed Lustre: mdt_io00_002: service thread pid 6709 was inactive for 43.831 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:ll_ost_out00_00 state:I task:ll_ost_out00_00 state:I stack:0 pid:15707 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] Lustre: Skipped 2 previous similar messages task:mdt_io00_002 state:I stack:0 pid:6709 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ? 
mdt_obd_postrecov+0x100/0x100 [mdt] ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] mdt_object_pdo_lock+0x535/0x910 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] Lustre: ll_ost_out00_00: service thread pid 37487 was inactive for 42.940 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 1 previous similar message ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_rename_source_lock+0x6b/0x180 [mdt] mdt_reint_rename+0xd38/0x34e0 [mdt] ? sptlrpc_svc_alloc_rs+0x70/0x460 [ptlrpc] ? lustre_pack_reply_v2+0x210/0x380 [ptlrpc] ? mdt_ucred+0x19/0x30 [mdt] ? ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] stack:0 pid:15779 ppid:2 flags:0x80004080 ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lu_name_is_valid_len+0x5e/0x80 [mdt] mdt_getattr_name_lock+0x278a/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ptlrpc_main+0xd30/0x1450 [ptlrpc] ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] ? 
lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] Call Trace: mdt_batch+0x7ee/0x20a9 [mdt] __schedule+0x351/0xcb0 ? lustre_msg_get_last_committed+0x110/0x110 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] schedule+0xc0/0x180 ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_pdo_lock+0x535/0x910 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lu_name_is_valid_len+0x5e/0x80 [mdt] mdt_getattr_name_lock+0x278a/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] mdt_batch+0x7ee/0x20a9 [mdt] ? lustre_msg_get_tag+0x1/0x110 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? 
ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x300000407 to 0x30000040a Lustre: 17907:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 6/24/0, destroy: 1/4/0 Lustre: 17907:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1777 previous similar messages Lustre: 17907:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2721/2721/0, xattr_set: 4081/38200/0 Lustre: 17907:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1777 previous similar messages Lustre: 17907:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 28/157/0, punch: 0/0/0, quota 1/3/0 Lustre: 17907:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1777 previous similar messages Lustre: 17907:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 7/118/0, delete: 2/5/1 Lustre: 17907:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1777 previous similar messages Lustre: 17907:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 2/2/0 Lustre: 17907:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1777 previous similar messages Lustre: mdt00_047: service thread pid 91486 was inactive for 75.738 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one. Lustre: Skipped 1 previous similar message Lustre: lustre-OST0003-osc-MDT0000: update sequence from 0x380000407 to 0x38000040a Lustre: mdt_io00_002: service thread pid 6709 completed after 100.156s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: ll_ost_out00_00: service thread pid 37487 completed after 99.272s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 15779:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 000000004677e0a0 ns: mdt-lustre-MDT0000_UUID lock: ffff8cb1c3a9de00/0xd3c7c850718d71aa lrc: 3/0,0 mode: PR/PR res: [0x20000040d:0xa8e:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xd3c7c850717019a7 expref: 209 pid: 15779 timeout: 0 lvb_type: 0 LustreError: 15779:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 3 previous similar messages Lustre: ll_ost_out00_00: service thread pid 15779 completed after 99.609s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff8cb0755c8000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. Lustre: mdt_io00_019: service thread pid 13972 completed after 99.071s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: mdt00_047: service thread pid 91486 completed after 99.353s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). Lustre: ll_ost_out00_00: service thread pid 15707 completed after 99.667s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: 593659:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff8cb0755c8000: inode [0x20000040b:0x3b86:0x0] mdc close failed: rc = -5 LustreError: 593659:0:(file.c:248:ll_close_inode_openhandle()) Skipped 25 previous similar messages LustreError: 586564:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000403:0x2:0x0] error: rc = -5 LustreError: 586564:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 33 previous similar messages Lustre: 12071:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0000: opcode 0: before 515 < left 3929, rollback = 0 Lustre: 12071:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1831 previous similar messages LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: filter-lustre-OST0002_UUID lock: ffff8cb122cdd400/0xd3c7c8507180c252 lrc: 3/0,0 mode: PW/PW res: [0x34000040a:0x1975:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000480030020 nid: 0@lo remote: 0xd3c7c85071807eb3 expref: 16565 pid: 442747 timeout: 4615 lvb_type: 0 LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 5 previous similar messages Lustre: lustre-OST0002-osc-ffff8cb0755c8000: Connection to lustre-OST0002 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 5 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
Lustre: 3299:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x59cc:0x0]/ may get corrupted (rc -108) Lustre: 3297:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x59c0:0x0]/ may get corrupted (rc -108) Lustre: 3304:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0x14f:0x0]// may get corrupted (rc -108) Lustre: 3299:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0xfe:0x0]/ may get corrupted (rc -108) Lustre: 3299:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x5a31:0x0]/ may get corrupted (rc -108) Lustre: 3306:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0x15b:0x0]// may get corrupted (rc -108) Lustre: 3294:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x59fe:0x0]/ may get corrupted (rc -108) Lustre: 3297:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x5a47:0x0]/ may get corrupted (rc -108) Lustre: 3303:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0x134:0x0]// may get corrupted (rc -108) Lustre: 3305:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000407:0x28a2:0x0]/ may get corrupted (rc -108) LustreError: lustre-OST0003-osc-ffff8cb075464000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
Lustre: lustre-OST0002-osc-ffff8cb0755c8000: Connection restored to (at 0@lo) Lustre: Skipped 5 previous similar messages Lustre: 3295:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000407:0x2915:0x0]// may get corrupted (rc -108) Lustre: 3306:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000407:0x28ae:0x0]// may get corrupted (rc -108) Lustre: 3304:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0xc00:0x0]/ may get corrupted (rc -108) Lustre: 3307:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x28000040a:0x12e4:0x0]/ may get corrupted (rc -108) Lustre: 3293:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0xc5b:0x0]// may get corrupted (rc -108) Lustre: 3293:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0xc54:0x0]// may get corrupted (rc -108) LustreError: 7782:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752650241 with bad export cookie 15260386110171266632 LustreError: 7782:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) Skipped 3 previous similar messages LustreError: lustre-OST0000-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x340000409 to 0x34000040c LustreError: 615095:0:(ldlm_resource.c:981:ldlm_resource_complain()) lustre-OST0000-osc-ffff8cb0755c8000: namespace resource [0x2c000040b:0x3375:0x0].0x0 (ffff8cb164560d00) refcount nonzero (1) after lock cleanup; forcing cleanup. 
LustreError: 615095:0:(ldlm_resource.c:981:ldlm_resource_complain()) Skipped 399 previous similar messages LustreError: 6693:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0x101a:0x0] doesn't exist!: rc = -14 LustreError: 6693:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 383 previous similar messages Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c0000409 to 0x2c000040c Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x300000409 to 0x30000040b Lustre: lustre-OST0003-osc-MDT0002: update sequence from 0x380000408 to 0x38000040b Lustre: lustre-OST0001-osc-MDT0002: update sequence from 0x300000408 to 0x30000040c Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c000040a to 0x2c000040d Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x380000409 to 0x38000040c Lustre: lustre-OST0002-osc-MDT0001: update sequence from 0x34000040b to 0x34000040d LustreError: 449702:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff8cb1b625ec80 x1837783393561856/t0(0) o104->lustre-OST0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: lustre-OST0000-osc-ffff8cb075464000: operation ldlm_enqueue to node 0@lo failed: rc = -107 LustreError: 449702:0:(client.c:1375:ptlrpc_import_delay_req()) Skipped 269 previous similar messages LustreError: Skipped 39 previous similar messages LustreError: lustre-OST0000-osc-ffff8cb075464000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. 
Lustre: 3305:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x1382:0x0]/ may get corrupted (rc -108) Lustre: 3305:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000407:0x302c:0x0]/ may get corrupted (rc -108) Lustre: 3305:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000407:0x3032:0x0]/ may get corrupted (rc -108) Lustre: 3301:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x1338:0x0]/ may get corrupted (rc -108) Lustre: 3303:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x137a:0x0]/ may get corrupted (rc -108) Lustre: 3305:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0x7b8:0x0]/ may get corrupted (rc -108) Lustre: 3303:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x2894:0x0]// may get corrupted (rc -108) LustreError: lustre-OST0003-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. 
Lustre: 3303:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x61a9:0x0]/ may get corrupted (rc -108) Lustre: 3303:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0x849:0x0]/ may get corrupted (rc -108) Lustre: 3300:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x619e:0x0]// may get corrupted (rc -108) Lustre: lustre-OST0003-osc-MDT0000: update sequence from 0x38000040a to 0x38000040d Lustre: lustre-OST0001-osc-MDT0000: update sequence from 0x30000040a to 0x30000040d Lustre: 9775:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff8cb114e10e00 x1837783466777216/t4295204699(0) o101->d664c5e4-c84d-42d3-8b78-136f6d1ffbc8@0@lo:405/0 lens 376/48536 e 0 to 0 dl 1752650610 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 Lustre: 9775:0:(mdt_recovery.c:102:mdt_req_from_lrd()) Skipped 58 previous similar messages LustreError: lustre-OST0002-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
LustreError: 445864:0:(ldlm_lockd.c:1363:ldlm_handle_enqueue()) ### lock on disconnected export 00000000c0664090 ns: filter-lustre-OST0002_UUID lock: ffff8cb1456daa00/0xd3c7c850723c870c lrc: 2/0,0 mode: --/PW res: [0x34000040c:0x26ac:0x0].0x0 rrc: 3 type: EXT [0->0] (req 0->0) gid 0 flags: 0x40000000000000 nid: local remote: 0xd3c7c850723c7841 expref: -99 pid: 445864 timeout: 0 lvb_type: 0 Lustre: 3306:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x28000040a:0x1a3b:0x0]/ may get corrupted (rc -5) Lustre: lustre-OST0002-osc-MDT0002: update sequence from 0x34000040a to 0x34000040e Lustre: 214224:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 6/24/0, destroy: 1/4/0 Lustre: 214224:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 4723 previous similar messages Lustre: 214224:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 2155/2155/0, xattr_set: 3232/30276/0 Lustre: 214224:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 4723 previous similar messages Lustre: 214224:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 28/157/0, punch: 0/0/0, quota 1/3/0 Lustre: 214224:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 4723 previous similar messages Lustre: 214224:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 7/118/0, delete: 2/5/1 Lustre: 214224:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 4723 previous similar messages Lustre: 214224:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 1/1/0, ref_del: 2/2/1 Lustre: 214224:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 4723 previous similar messages Lustre: lustre-OST0000-osc-MDT0002: update sequence from 0x2c000040b to 0x2c000040e Lustre: 10623:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-MDT0002: opcode 0: before 515 < left 3077, rollback = 0 Lustre: 10623:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 4763 previous similar messages LustreError: 
6684:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 100s: evicting client at 0@lo ns: filter-lustre-OST0001_UUID lock: ffff8cb16e6c3c00/0xd3c7c85072474ed3 lrc: 3/0,0 mode: PW/PW res: [0x30000040d:0x11f4:0x0].0x0 rrc: 3 type: EXT [0->18446744073709551615] (req 0->18446744073709551615) gid 0 flags: 0x60000400000020 nid: 0@lo remote: 0xd3c7c85072473278 expref: 24862 pid: 447461 timeout: 5251 lvb_type: 0 LustreError: 6684:0:(ldlm_lockd.c:257:expired_lock_main()) Skipped 5 previous similar messages Lustre: lustre-OST0001-osc-ffff8cb0755c8000: Connection to lustre-OST0001 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: Skipped 6 previous similar messages LustreError: lustre-OST0003-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. Lustre: 3295:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x240000403:0x6512:0x0]// may get corrupted (rc -108) LustreError: lustre-OST0001-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. Lustre: lustre-OST0003-osc-ffff8cb0755c8000: Connection restored to (at 0@lo) Lustre: Skipped 5 previous similar messages Lustre: lustre-OST0002-osc-MDT0000: update sequence from 0x34000040c to 0x34000040f LustreError: lustre-OST0000-osc-ffff8cb075464000: This client was evicted by lustre-OST0000; in progress operations using this service will fail. LustreError: lustre-OST0002-osc-ffff8cb075464000: This client was evicted by lustre-OST0002; in progress operations using this service will fail. 
Lustre: 3304:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x1a30:0x0]/ may get corrupted (rc -108) Lustre: 3292:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x1a6d:0x0]/ may get corrupted (rc -108) Lustre: 3292:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0xd1a:0x0]/ may get corrupted (rc -108) Lustre: 3307:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x280000408:0x30c5:0x0]// may get corrupted (rc -108) Lustre: 3307:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x1a63:0x0]/ may get corrupted (rc -108) Lustre: 3295:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040d:0x1a42:0x0]/ may get corrupted (rc -108) Lustre: 3305:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x28000040a:0x1d7d:0x0]/ may get corrupted (rc -108) Lustre: 3307:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0xcea:0x0]/ may get corrupted (rc -108) Lustre: lustre-OST0001-osc-MDT0001: update sequence from 0x30000040b to 0x30000040e Lustre: lustre-OST0000-osc-MDT0000: update sequence from 0x2c000040c to 0x2c000040f Lustre: lustre-OST0002-osc-MDT0001: update sequence from 0x34000040d to 0x340000410 Lustre: lustre-OST0003-osc-MDT0001: update sequence from 0x38000040c to 0x38000040e LustreError: 19849:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752650892 with bad export cookie 15260386110210021873 LustreError: 19849:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) Skipped 13 previous similar 
messages LustreError: lustre-OST0003-osc-ffff8cb0755c8000: This client was evicted by lustre-OST0003; in progress operations using this service will fail. Lustre: 3293:0:(llite_lib.c:4231:ll_dirty_page_discard_warn()) lustre: dirty page discard: 192.168.123.97@tcp:/lustre/fid: [0x20000040e:0xe26:0x0]/ may get corrupted (rc -108) LustreError: 12071:0:(mdt_open.c:1315:mdt_cross_open()) lustre-MDT0000: [0x20000040c:0xfa7:0x0] doesn't exist!: rc = -14 LustreError: 12071:0:(mdt_open.c:1315:mdt_cross_open()) Skipped 443 previous similar messages Lustre: lustre-OST0000-osc-MDT0001: update sequence from 0x2c000040d to 0x2c0000410 | Link to test |
racer test 1: racer on clients: centos-10.localnet DURATION=2700 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP DEBUG_PAGEALLOC CPU: 10 PID: 200031 Comm: ll_sa_200015 Kdump: loaded Tainted: G O -------- - - 4.18.0rocky8.10-debug #1 Hardware name: Red Hat KVM, BIOS 1.16.0-4.module+el8.9.0+1408+7b966129 04/01/2014 RIP: 0010:_atomic_dec_and_lock+0x2/0xa0 Code: 02 01 e8 e1 cd 87 ff 48 83 05 a9 53 ce 02 01 39 05 67 34 75 01 77 cf 48 83 05 a9 53 ce 02 01 5b c3 90 90 90 90 90 90 90 55 53 <8b> 07 48 83 05 b4 53 ce 02 01 83 f8 01 74 2b 48 83 05 b7 53 ce 02 RSP: 0018:ffffb65a4f247e90 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008020001e RDX: 000000008020001f RSI: ffff963335654e88 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9632fa699600 R11: 000000000000b988 R12: ffff963335654e40 R13: ffff9632fa6996b8 R14: ffff963335654b08 R15: ffff963335654e88 FS: 0000000000000000(0000) GS:ffff9634b2480000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000001907f3000 CR4: 00000000000006e0 Call Trace: ? show_regs.cold.9+0x22/0x2f ? __die_body+0x22/0x90 ? __die+0x33/0x4a ? no_context+0x30f/0x5a0 ? update_load_avg+0x9f/0xa40 ? __bad_area_nosemaphore+0x1c6/0x260 ? bad_area_nosemaphore+0x1a/0x30 ? do_user_addr_fault+0x540/0x8a0 ? __do_page_fault+0x6b/0xa0 ? do_page_fault+0x87/0x30f ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0xa0 ll_statahead_thread+0x1100/0x15e0 [lustre] ? ll_statahead_by_list+0xce0/0xce0 [lustre] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 Modules linked in: loop zfs(O) spl(O) lustre(O) osp(O) ofd(O) lod(O) mdt(O) mdd(O) mgs(O) osd_ldiskfs(O) ldiskfs(O) lquota(O) lfsck(O) obdecho(O) mgc(O) mdc(O) lov(O) osc(O) lmv(O) fid(O) fld(O) ptlrpc_gss(O) ptlrpc(O) obdclass(O) ksocklnd(O) crc32_generic lnet(O) dm_flakey libcfs(O) virtio_balloon i2c_piix4 pcspkr rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver ata_generic ata_piix serio_raw libata dm_mirror dm_region_hash dm_log dm_mod sha512_ssse3 sha512_generic CR2: 0000000000000008 | Lustre: 7981:0:(mdt_recovery.c:102:mdt_req_from_lrd()) @@@ restoring transno req@ffff9632a7ea8e00 x1837697169812352/t4294967461(0) o101->9671a3e0-d991-4aa4-91af-738daf1b19d9@0@lo:517/0 lens 376/864 e 0 to 0 dl 1752564652 ref 1 fl Interpret:H/602/0 rc 0/0 job:'dd.0' uid:0 gid:0 projid:0 1[8282]: segfault at 56543b744000 ip 000056543b744000 sp 00007ffd64b5c498 error 14 in 8[56543b944000+1000] Code: Unable to access opcode bytes at RIP 0x56543b743fd6. hrtimer: interrupt took 12800723 ns Lustre: 6010:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6010:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6010:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6010:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 6010:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6010:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6009:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6009:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6009:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6009:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous 
similar message Lustre: 6009:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6009:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6009:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6009:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6009:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6009:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6009:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6009:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 13297:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0x166:0x0] with magic=0xbd60bd0 Lustre: 8200:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000401:0xf1:0x0] with magic=0xbd60bd0 Lustre: 8200:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message Lustre: 6010:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 6010:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6010:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 6010:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0 Lustre: 6010:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 
6010:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6010:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6010:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 515 < left 618, rollback = 7 Lustre: 6010:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6010:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0 Lustre: 6010:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6010:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6010:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 6010:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6010:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 1 previous similar message Lustre: 12011:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7 Lustre: 12011:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 7 previous similar messages Lustre: 12011:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 12011:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 7 previous similar messages Lustre: 12011:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 
Lustre: 12011:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 12011:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 12011:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 12011:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 12011:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: 12011:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 12011:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 7 previous similar messages
Lustre: mdt00_006: service thread pid 8005 was inactive for 43.109 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes:
Lustre: mdt00_014: service thread pid 12303 was inactive for 42.704 seconds. Watchdog stack traces are limited to 3 per 300 seconds, skipping this one.
task:mdt00_003 state:I
task:mdt00_012 state:I stack:0 pid:9806 ppid:2 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2246/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc]
Lustre: Skipped 2 previous similar messages
tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
stack:0 pid:6948 ppid:2 flags:0x80004080
Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc]
task:mdt00_006 state:I
? do_raw_spin_unlock+0x75/0x190 ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc]
stack:0 pid:8005 ppid:2 flags:0x80004080
? mdt_obd_postrecov+0x100/0x100 [mdt]
Call Trace: mdt_object_lock_internal+0x20b/0x5a0 [mdt] __schedule+0x351/0xcb0 ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock_try+0xae/0x310 [mdt] mdt_getattr_name_lock+0x2246/0x3350 [mdt] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] schedule+0xc0/0x180 ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_lock+0x9e/0x240 [mdt] mdt_intent_getxattr+0x9f/0x440 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_intent_layout+0x13d0/0x13d0 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_enqueue+0xd0/0x300 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30
LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9632fe348400/0xcf14ec31d84dd226 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x2ee:0x0].0x0 bits 0x13/0x0 rrc: 15 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d84dd09e expref: 156 pid: 5729 timeout: 221 lvb_type: 0
Lustre: mdt00_009: service thread pid 8534 completed after 100.606s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_001: service thread pid 5729 completed after 100.075s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_014: service thread pid 12303 completed after 100.052s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_003: service thread pid 6948 completed after 100.388s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_012: service thread pid 9806 completed after 100.320s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_011: service thread pid 8970 completed after 100.317s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: mdt00_002: service thread pid 5730 completed after 100.124s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0000-mdc-ffff9632e783c000: operation mds_reint to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff9632e783c000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
Lustre: mdt00_006: service thread pid 8005 completed after 100.462s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: lustre-MDT0000-mdc-ffff9632e783c000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 19457:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9632e783c000: inode [0x200000401:0x2dc:0x0] mdc close failed: rc = -108
Lustre: mdt00_000: service thread pid 5728 completed after 99.971s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
LustreError: 19057:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 19457:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9632e783c000: namespace resource [0x200000007:0x1:0x0].0x0 (ffff96335ac45700) refcount nonzero (4) after lock cleanup; forcing cleanup.
LustreError: 19057:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 39 previous similar messages
Lustre: mdt00_017: service thread pid 15460 completed after 99.961s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources).
Lustre: lustre-MDT0000-mdc-ffff9632e783c000: Connection restored to (at 0@lo)
Lustre: 12011:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7
Lustre: 12011:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 5 previous similar messages
Lustre: 12011:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 12011:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 5 previous similar messages
Lustre: 12011:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 12011:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 5 previous similar messages
Lustre: 12011:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0
Lustre: 12011:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 5 previous similar messages
Lustre: 12011:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 12011:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 5 previous similar messages
Lustre: 12011:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 12011:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 5 previous similar messages
4[22391]: segfault at 8 ip 00007fc0023a6875 sp 00007ffedaa6c780 error 4 in ld-2.28.so[7fc002385000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
12[24533]: segfault at 8 ip 00007f76ebfd7875 sp 00007ffc9b7dfe00 error 4 in ld-2.28.so[7f76ebfb6000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 8596:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0001: opcode 7: before 516 < left 528, rollback = 7
Lustre: 8596:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 15 previous similar messages
Lustre: 8596:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 8596:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 15 previous similar messages
Lustre: 8596:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 8596:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 15 previous similar messages
Lustre: 8596:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/528/0, punch: 0/0/0, quota 1/3/0
Lustre: 8596:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 15 previous similar messages
Lustre: 8596:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 8596:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 15 previous similar messages
Lustre: 8596:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 8596:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 15 previous similar messages
Lustre: lustre-OST0001-osc-ffff963298c9a000: disconnect after 21s idle
LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff96332065d200/0xcf14ec31d85636cb lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x78b:0x0].0x0 bits 0x1b/0x0 rrc: 11 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d8563287 expref: 207 pid: 8534 timeout: 359 lvb_type: 0
LustreError: 8032:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 000000004255a897 ns: mdt-lustre-MDT0000_UUID lock: ffff96330182b400/0xcf14ec31d85649b0 lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x78b:0x0].0x0 bits 0x1b/0x0 rrc: 9 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xcf14ec31d8564932 expref: 14 pid: 8032 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff9632e783c000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff9632e783c000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 1 previous similar message
LustreError: 7192:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752564921 with bad export cookie 14921811164211837787
LustreError: lustre-MDT0000-mdc-ffff9632e783c000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 31317:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000402:0x78b:0x0] error: rc = -5
LustreError: 30932:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9632e783c000: inode [0x200000402:0x78b:0x0] mdc close failed: rc = -5
LustreError: 30925:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000402:0x78b:0x0] error -108.
LustreError: 31317:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 32 previous similar messages
LustreError: 30932:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages
LustreError: 31063:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 31412:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff9632e783c000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff9632fe7fa900) refcount nonzero (3) after lock cleanup; forcing cleanup.
LustreError: 31412:0:(ldlm_resource.c:1097:ldlm_resource_complain()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff9632e783c000: Connection restored to (at 0@lo)
Lustre: 12011:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 612, rollback = 7
Lustre: 12011:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 11 previous similar messages
Lustre: 12011:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 12011:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 11 previous similar messages
Lustre: 12011:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 12011:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 11 previous similar messages
Lustre: 12011:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/612/0, punch: 0/0/0, quota 1/3/0
Lustre: 12011:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 11 previous similar messages
Lustre: 12011:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 12011:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 11 previous similar messages
Lustre: 12011:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 12011:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 11 previous similar messages
1[32823]: segfault at 8 ip 00007f0a3a0ad875 sp 00007ffc991d2210 error 4 in ld-2.28.so[7f0a3a08c000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
9[34228]: segfault at 556ea28e4000 ip 0000556ea28e4000 sp 00007ffdd0766f50 error 14 in 9[556ea2ae4000+1000]
Code: Unable to access opcode bytes at RIP 0x556ea28e3fd6.
Lustre: 24082:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x87:0x0] with magic=0xbd60bd0
Lustre: 24082:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
2[38075]: segfault at 8 ip 00007f071c36b875 sp 00007ffc9603dc70 error 4 in ld-2.28.so[7f071c34a000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 38694:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff9632e783c000: inode [0x200000402:0x909:0x0] mdc close failed: rc = -13
LustreError: 38694:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages
18[39617]: segfault at 8 ip 00007f53e0d0a875 sp 00007ffcdb9fba70 error 4 in ld-2.28.so[7f53e0ce9000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 8200:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0xbf2:0x0] with magic=0xbd60bd0
Lustre: 8200:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 9806:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000402:0xdb9:0x0] with magic=0xbd60bd0
Lustre: 9806:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
Lustre: 8428:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0000: opcode 7: before 515 < left 618, rollback = 7
Lustre: 8428:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 63 previous similar messages
Lustre: 8428:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 8428:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 8428:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/1, xattr_set: 2/15/0
Lustre: 8428:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 8428:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 8428:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 8428:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 8428:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 63 previous similar messages
Lustre: 8428:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 8428:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 63 previous similar messages
LustreError: 54882:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000402:0xf34:0x0] mdc close failed: rc = -13
15[56341]: segfault at 8 ip 00007f9d55ef3875 sp 00007fffffe768b0 error 4 in ld-2.28.so[7f9d55ed2000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
17[57466]: segfault at 0 ip 0000563e1dd87b47 sp 00007ffc0e105ef0 error 6 in 17[563e1dd83000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
8[61142]: segfault at 8 ip 00007f11653da875 sp 00007fffea46ee80 error 4 in ld-2.28.so[7f11653b9000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 104s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9632f220ba00/0xcf14ec31d86be15d lrc: 3/0,0 mode: PR/PR res: [0x200000402:0x11c5:0x0].0x0 bits 0x13/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d86be141 expref: 536 pid: 9807 timeout: 558 lvb_type: 0
LustreError: 5713:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752565121 with bad export cookie 14921811164211326164
LustreError: 5713:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) Skipped 1 previous similar message
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: operation mds_getxattr to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 2 previous similar messages
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 5742:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff9632fa6de580 x1837697210531456/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295
LustreError: 62609:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000402:0x11ae:0x0] mdc close failed: rc = -108
LustreError: 62512:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff963298c9a000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 62512:0:(mdc_request.c:1477:mdc_read_page()) Skipped 4 previous similar messages
LustreError: 62609:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff963298c9a000: namespace resource [0x200000404:0x29f:0x0].0x0 (ffff9632ff6a9600) refcount nonzero (1) after lock cleanup; forcing cleanup.
LustreError: 62621:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 62609:0:(ldlm_resource.c:1097:ldlm_resource_complain()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection restored to (at 0@lo)
LustreError: 62621:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 17 previous similar messages
14[62619]: segfault at 8 ip 00007f20c41ab875 sp 00007fff47e27f40 error 4 in ld-2.28.so[7f20c418a000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 13866:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7
Lustre: 13866:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 23 previous similar messages
Lustre: 13866:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 13866:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 23 previous similar messages
Lustre: 13866:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 13866:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 23 previous similar messages
Lustre: 13866:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0
Lustre: 13866:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 23 previous similar messages
Lustre: 13866:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 13866:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 23 previous similar messages
Lustre: 13866:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 13866:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 23 previous similar messages
LustreError: 64551:0:(statahead.c:1600:ll_statahead_thread()) lustre: ll_sa_64052 LIST => FNAME no wakeup.
LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9633294d9200/0xcf14ec31d86e6525 lrc: 3/0,0 mode: PR/PR res: [0x200000405:0xed:0x0].0x0 bits 0x1b/0x0 rrc: 12 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d86e6502 expref: 78 pid: 5729 timeout: 673 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: operation ldlm_enqueue to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 1 previous similar message
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 65965:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000405:0xed:0x0] error -5.
LustreError: 66390:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000401:0x1:0x0] mdc close failed: rc = -108
LustreError: 65965:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message
LustreError: 66390:0:(file.c:248:ll_close_inode_openhandle()) Skipped 6 previous similar messages
LustreError: 66390:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff963298c9a000: namespace resource [0x200000401:0x1:0x0].0x0 (ffff9632f53c7f00) refcount nonzero (2) after lock cleanup; forcing cleanup.
LustreError: 66087:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108
LustreError: 66087:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection restored to (at 0@lo)
Lustre: 24088:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000406:0x34:0x0] with magic=0xbd60bd0
Lustre: 24088:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages
traps: 14[70133] general protection fault ip:55f0d22f3c35 sp:7ffd0ad16048 error:0 in 14[55f0d22ee000+7000]
Lustre: 8200:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0xdb7:0x0] with magic=0xbd60bd0
Lustre: 8200:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 3 previous similar messages
LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9632f0e06600/0xcf14ec31d87bb22e lrc: 3/0,0 mode: CR/CR res: [0x200000406:0x588:0x0].0x0 bits 0xa/0x0 rrc: 9 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d87baffe expref: 258 pid: 8534 timeout: 845 lvb_type: 0
LustreError: 6948:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000957ef457 ns: mdt-lustre-MDT0000_UUID lock: ffff963318806800/0xcf14ec31d87bb664 lrc: 3/0,0 mode: PR/PR res: [0x200000406:0x588:0x0].0x0 bits 0x20/0x0 rrc: 4 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xcf14ec31d87bb648 expref: 5 pid: 6948 timeout: 0 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: operation mds_reint to node 0@lo failed: rc = -107
LustreError: 6948:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 5 previous similar messages
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: Skipped 1 previous similar message
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 84837:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -5
LustreError: 85022:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000406:0x581:0x0] mdc close failed: rc = -108
LustreError: 84857:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000406:0x588:0x0] error: rc = -108
LustreError: 84857:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 4 previous similar messages
LustreError: 84837:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 1 previous similar message
LustreError: 84762:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000406:0x588:0x0] error -108.
LustreError: 85022:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages
LustreError: 84762:0:(vvp_io.c:1909:vvp_io_init()) Skipped 1 previous similar message
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection restored to (at 0@lo)
Lustre: 13866:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0003: opcode 7: before 516 < left 618, rollback = 7
Lustre: 13866:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 57 previous similar messages
Lustre: 13866:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0
Lustre: 13866:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 57 previous similar messages
Lustre: 13866:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0
Lustre: 13866:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 57 previous similar messages
Lustre: 13866:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 1/3/0
Lustre: 13866:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 57 previous similar messages
Lustre: 13866:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0
Lustre: 13866:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 57 previous similar messages
Lustre: 13866:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0
Lustre: 13866:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 57 previous similar messages
4[95739]: segfault at 8 ip 00007fe8bc3cd875 sp 00007ffdc9c69470 error 4 in ld-2.28.so[7fe8bc3ac000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
2[102643]: segfault at 8 ip 00007f5839d72875 sp 00007ffe41959ef0 error 4 in ld-2.28.so[7f5839d51000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: 42880:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x200000404:0x1892:0x0] with magic=0xbd60bd0
Lustre: 42880:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message
2[108699]: segfault at 1 ip 000055ff0ba8e950 sp 00007ffdaa17aab8 error 6 in 2[55ff0ba8a000+7000]
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
4[114964]: segfault at 8 ip 00007f4ca18d7875 sp 00007ffe3ea8ee70 error 4 in ld-2.28.so[7f4ca18b6000+2f000]
Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48
Lustre: lustre-OST0000-osc-ffff9632e783c000: disconnect after 24s idle
Lustre: lustre-OST0000-osc-ffff963298c9a000: disconnect after 21s idle
Lustre: lustre-OST0003-osc-ffff963298c9a000: disconnect after 22s idle
LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 103s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff963294c26e00/0xcf14ec31d8939d7c lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x1d5c:0x0].0x0 bits 0x13/0x0 rrc: 6 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d8939d3d expref: 409 pid: 5728 timeout: 1083 lvb_type: 0
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: operation mds_getxattr to node 0@lo failed: rc = -107
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail.
LustreError: 120106:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000404:0x1d5c:0x0] error: rc = -108
LustreError: 120106:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 9 previous similar messages
LustreError: 120418:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000404:0x1b7c:0x0] mdc close failed: rc = -108
LustreError: 120321:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff963298c9a000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108
LustreError: 120418:0:(file.c:248:ll_close_inode_openhandle()) Skipped 9 previous similar messages
LustreError: 120321:0:(mdc_request.c:1477:mdc_read_page()) Skipped 7 previous similar messages
LustreError: 120418:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff963298c9a000: namespace resource [0x200000407:0x85e:0x0].0x0 (ffff96328946f700) refcount nonzero (1) after lock cleanup; forcing cleanup.
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection restored to (at 0@lo) Lustre: lustre-OST0002-osc-ffff963298c9a000: disconnect after 23s idle LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff963318a5f200/0xcf14ec31d89e2f9d lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x468:0x0].0x0 bits 0x1b/0x0 rrc: 11 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d89e2f57 expref: 194 pid: 7981 timeout: 1232 lvb_type: 0 LustreError: 24088:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000e02466e1 ns: mdt-lustre-MDT0000_UUID lock: ffff963294df1400/0xcf14ec31d89e40de lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x468:0x0].0x0 bits 0x1b/0x0 rrc: 9 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xcf14ec31d89e40d0 expref: 50 pid: 24088 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff963298c9a000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: Skipped 5 previous similar messages LustreError: 9806:0:(client.c:1375:ptlrpc_import_delay_req()) @@@ IMP_CLOSED req@ffff963301ad2a00 x1837697264126720/t0(0) o104->lustre-MDT0000@0@lo:15/16 lens 328/224 e 0 to 0 dl 0 ref 1 fl Rpc:QU/0/ffffffff rc 0/-1 job:'' uid:4294967295 gid:4294967295 projid:4294967295 LustreError: lustre-MDT0000-mdc-ffff963298c9a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 134962:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000408:0x468:0x0] error: rc = -5 LustreError: 134962:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 4 previous similar messages LustreError: 134671:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000408:0x468:0x0] error -108. LustreError: 134671:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000408:0x468:0x0] mdc close failed: rc = -108 LustreError: 135029:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 134671:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages LustreError: 135121:0:(mdc_request.c:1477:mdc_read_page()) lustre-MDT0000-mdc-ffff963298c9a000: [0x200000401:0x1:0x0] lock enqueue fails: rc = -108 LustreError: 135029:0:(llite_lib.c:2039:ll_md_setattr()) Skipped 2 previous similar messages LustreError: 135121:0:(mdc_request.c:1477:mdc_read_page()) Skipped 18 previous similar messages LustreError: 135167:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff963298c9a000: namespace resource [0x200000404:0x1dce:0x0].0x0 (ffff9632a7b11300) refcount nonzero (1) after lock cleanup; forcing cleanup. Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection restored to (at 0@lo) Lustre: lustre-OST0000: already connected client lustre-MDT0000-mdtlov_UUID (at 0@lo) with handle 0xcf14ec31d8462ace. Rejecting client with the same UUID trying to reconnect with handle 0x4b15c8d8c1a38fe3 Lustre: lustre-OST0000: already connected client lustre-MDT0000-mdtlov_UUID (at 0@lo) with handle 0xcf14ec31d8462ace. 
Rejecting client with the same UUID trying to reconnect with handle 0x4b15c8d8c1a38fe3 LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 101s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff963294025e00/0xcf14ec31d89ea8b8 lrc: 3/0,0 mode: PR/PR res: [0x200000404:0x22b4:0x0].0x0 bits 0x1b/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d89ea22f expref: 610 pid: 8200 timeout: 1335 lvb_type: 0 LustreError: 6948:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000d21526ce ns: mdt-lustre-MDT0000_UUID lock: ffff9632f3c41800/0xcf14ec31d89eb2dd lrc: 3/0,0 mode: PR/PR res: [0x200000408:0x74:0x0].0x0 bits 0x13/0x0 rrc: 3 type: IBT gid 0 flags: 0x50200000000000 nid: 0@lo remote: 0xcf14ec31d89eb2c1 expref: 6 pid: 6948 timeout: 0 lvb_type: 0 LustreError: lustre-MDT0000-mdc-ffff9632e783c000: operation mds_getxattr to node 0@lo failed: rc = -107 LustreError: 8427:0:(ldlm_lockd.c:2550:ldlm_cancel_handler()) ldlm_cancel from 0@lo arrived at 1752565896 with bad export cookie 14921811164212393202 Lustre: lustre-MDT0000-mdc-ffff9632e783c000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9632e783c000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 6948:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) Skipped 5 previous similar messages LustreError: Skipped 3 previous similar messages LustreError: 135655:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 135254:0:(vvp_io.c:1909:vvp_io_init()) lustre: refresh file layout [0x200000404:0x22b4:0x0] error -5. 
LustreError: 135254:0:(vvp_io.c:1909:vvp_io_init()) Skipped 2 previous similar messages LustreError: 135005:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000007:0x1:0x0] error: rc = -108 LustreError: 135005:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 6 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9632e783c000: Connection restored to (at 0@lo) 2[139844]: segfault at 8 ip 00007f06f91c4875 sp 00007fff30b2cf00 error 4 in ld-2.28.so[7f06f91a3000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 3[139990]: segfault at 8 ip 00007f99e98ac875 sp 00007ffe96faf2f0 error 4 in ld-2.28.so[7f99e988b000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 6009:0:(osd_internal.h:1335:osd_trans_exec_op()) lustre-OST0002: opcode 7: before 516 < left 618, rollback = 7 Lustre: 6009:0:(osd_internal.h:1335:osd_trans_exec_op()) Skipped 153 previous similar messages Lustre: 6009:0:(osd_handler.c:1966:osd_trans_dump_creds()) create: 0/0/0, destroy: 0/0/0 Lustre: 6009:0:(osd_handler.c:1966:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 6009:0:(osd_handler.c:1973:osd_trans_dump_creds()) attr_set: 1/1/0, xattr_set: 2/15/0 Lustre: 6009:0:(osd_handler.c:1973:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 6009:0:(osd_handler.c:1983:osd_trans_dump_creds()) write: 2/618/0, punch: 0/0/0, quota 4/150/0 Lustre: 6009:0:(osd_handler.c:1983:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 6009:0:(osd_handler.c:1990:osd_trans_dump_creds()) insert: 0/0/0, delete: 0/0/0 Lustre: 6009:0:(osd_handler.c:1990:osd_trans_dump_creds()) Skipped 153 previous similar messages Lustre: 
6009:0:(osd_handler.c:1997:osd_trans_dump_creds()) ref_add: 0/0/0, ref_del: 0/0/0 Lustre: 6009:0:(osd_handler.c:1997:osd_trans_dump_creds()) Skipped 153 previous similar messages ptlrpc_watchdog_fire: 6 callbacks suppressed Lustre: ll_ost_out00_00: service thread pid 32231 was inactive for 40.683 seconds. The thread might be hung, or it might only be slow and will resume later. Dumping the stack trace for debugging purposes: task:mdt_io00_004 state:I Lustre: Skipped 1 previous similar message task:mdt_io00_003 state:I stack:0 pid:9175 ppid:2 flags:0x80004080 Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? cfs_trace_unlock_tcd+0x28/0xa0 [libcfs] ? libcfs_debug_msg+0xcf4/0x1200 [libcfs] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_object_pdo_lock+0x409/0x910 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_parent_lock+0x8f/0x370 [mdt] mdt_lock_two_dirs+0x31/0x210 [mdt] mdt_reint_rename+0x1260/0x34e0 [mdt] ? sptlrpc_svc_alloc_rs+0x70/0x460 [ptlrpc] ? lustre_pack_reply_v2+0x210/0x380 [ptlrpc] ? mdt_ucred+0x19/0x30 [mdt] ? ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? 
set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 stack:0 pid:110656 ppid:2 flags:0x80004080 task:ll_ost_out00_00 state:I stack:0 pid:32231 ppid:2 flags:0x80004080 Call Trace: Call Trace: __schedule+0x351/0xcb0 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 __schedule+0x351/0xcb0 ? __next_timer_interrupt+0x160/0x160 schedule+0xc0/0x180 schedule_timeout+0xb4/0x190 ? __next_timer_interrupt+0x160/0x160 ? do_raw_spin_unlock+0x75/0x190 ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? do_raw_spin_unlock+0x75/0x190 ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ldlm_completion_ast+0xbfe/0x1280 [ptlrpc] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? woken_wake_function+0x30/0x30 ldlm_cli_enqueue_local+0x60b/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? do_raw_spin_unlock+0x75/0x190 mdt_object_lock_internal+0x20b/0x5a0 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_object_pdo_lock+0x535/0x910 [mdt] mdt_object_lock+0x9e/0x240 [mdt] ? mdt_obd_postrecov+0x100/0x100 [mdt] mdt_rename_source_lock+0x6b/0x180 [mdt] ? ldlm_cli_enqueue_local+0xc40/0xc40 [ptlrpc] mdt_reint_rename+0xd38/0x34e0 [mdt] mdt_parent_lock+0x8f/0x370 [mdt] ? sptlrpc_svc_alloc_rs+0x70/0x460 [ptlrpc] ? mdt_name_unpack+0xc6/0x140 [mdt] ? lustre_pack_reply_v2+0x210/0x380 [ptlrpc] ? lu_name_is_valid_len+0x5e/0x80 [mdt] ? mdt_ucred+0x19/0x30 [mdt] mdt_getattr_name_lock+0x2787/0x3350 [mdt] ? ucred_set_audit_enabled.isra.12+0x10/0xa0 [mdt] mdt_reint_rec+0x139/0x2b0 [mdt] mdt_reint_internal+0x6a0/0xdc0 [mdt] mdt_reint+0x163/0x190 [mdt] tgt_handle_request0+0x137/0xaf0 [ptlrpc] mdt_intent_getattr+0x2e2/0x630 [mdt] mdt_intent_opc.constprop.43+0x153/0xfb0 [mdt] ? mdt_getattr_name_lock+0x3350/0x3350 [mdt] mdt_intent_policy+0x14b/0x670 [mdt] ldlm_lock_enqueue+0x43c/0xcd0 [ptlrpc] ? _raw_read_unlock+0x12/0x30 ? 
cfs_hash_rw_unlock+0x11/0x30 [obdclass] ldlm_handle_enqueue+0x43f/0x2320 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lustre_msg_buf+0x1b/0x70 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 ? __req_capsule_get+0x44e/0xa50 [ptlrpc] ? lustre_swab_ldlm_lock_desc+0x90/0x90 [ptlrpc] mdt_batch_getattr+0xf6/0x1f0 [mdt] mdt_batch+0x7ee/0x20a9 [mdt] ? lustre_msg_get_last_committed+0x110/0x110 [ptlrpc] tgt_handle_request0+0x137/0xaf0 [ptlrpc] tgt_request_handle+0x351/0x1c00 [ptlrpc] ptlrpc_server_handle_request+0x443/0x13b0 [ptlrpc] ? lprocfs_counter_add+0x15b/0x210 [obdclass] ptlrpc_main+0xd30/0x1450 [ptlrpc] ? ptlrpc_wait_event+0x980/0x980 [ptlrpc] kthread+0x1d1/0x200 ? set_kthread_struct+0x70/0x70 ret_from_fork+0x1f/0x30 LustreError: 5718:0:(ldlm_lockd.c:257:expired_lock_main()) ### lock callback timer expired after 102s: evicting client at 0@lo ns: mdt-lustre-MDT0000_UUID lock: ffff9633307bea00/0xcf14ec31d8a5e407 lrc: 3/0,0 mode: PR/PR res: [0x200000409:0x2dc:0x0].0x0 bits 0x13/0x0 rrc: 4 type: IBT gid 0 flags: 0x60200400000020 nid: 0@lo remote: 0xcf14ec31d8a5e382 expref: 189 pid: 32231 timeout: 1473 lvb_type: 0 LustreError: 24088:0:(ldlm_lockd.c:1447:ldlm_handle_enqueue()) ### lock on destroyed export 00000000616b2634 ns: mdt-lustre-MDT0000_UUID lock: ffff9632fc7f9a00/0xcf14ec31d8a5e4e7 lrc: 3/0,0 mode: PR/PR res: [0x200000401:0x1:0x0].0x0 bits 0x13/0x0 rrc: 15 type: IBT gid 0 flags: 0x50200400000020 nid: 0@lo remote: 0xcf14ec31d8a5e4a1 expref: 111 pid: 24088 timeout: 0 lvb_type: 0 Lustre: mdt_io00_004: service thread pid 110656 completed after 102.128s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). 
LustreError: lustre-MDT0000-mdc-ffff963298c9a000: operation ldlm_enqueue to node 0@lo failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection to lustre-MDT0000 (at 0@lo) was lost; in progress operations using this service will wait for recovery to complete Lustre: ll_ost_out00_00: service thread pid 32231 completed after 102.138s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: lustre-MDT0000-mdc-ffff963298c9a000: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. Lustre: mdt_io00_003: service thread pid 9175 completed after 102.104s. This likely indicates the system was overloaded (too many service threads, or not enough hardware resources). LustreError: Skipped 4 previous similar messages LustreError: 145905:0:(file.c:6202:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000401:0x1:0x0] error: rc = -107 LustreError: 145905:0:(file.c:6202:ll_inode_revalidate_fini()) Skipped 25 previous similar messages LustreError: 145674:0:(llite_lib.c:2039:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 145971:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffff963298c9a000: inode [0x200000409:0x2a0:0x0] mdc close failed: rc = -108 LustreError: 145971:0:(file.c:248:ll_close_inode_openhandle()) Skipped 17 previous similar messages LustreError: 145971:0:(ldlm_resource.c:1097:ldlm_resource_complain()) lustre-MDT0000-mdc-ffff963298c9a000: namespace resource [0x200000409:0x1b9:0x0].0x0 (ffff9632945dfe00) refcount nonzero (2) after lock cleanup; forcing cleanup. 
Lustre: lustre-MDT0000-mdc-ffff963298c9a000: Connection restored to (at 0@lo) 1[150535]: segfault at 0 ip 000055a15f3e8b47 sp 00007ffcaae49210 error 6 in 1[55a15f3e4000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 traps: 8[161687] general protection fault ip:560fdbc1e28d sp:7ffdda194fc8 error:0 in 8[560fdbc19000+7000] 8[162141]: segfault at 8 ip 00007f409893a875 sp 00007ffc59621370 error 4 in ld-2.28.so[7f4098919000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: 9806:0:(lod_lov.c:1402:lod_parse_striping()) lustre-MDT0000-mdtlov: EXTENSION flags=40 set on component[2]=1 of non-SEL file [0x20000040b:0x547:0x0] with magic=0xbd60bd0 Lustre: 9806:0:(lod_lov.c:1402:lod_parse_striping()) Skipped 1 previous similar message 13[167083]: segfault at 8 ip 00007f693b5fa875 sp 00007ffc084f3e70 error 4 in ld-2.28.so[7f693b5d9000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 13[172313]: segfault at 8 ip 00007f732861c875 sp 00007fff814a94b0 error 4 in ld-2.28.so[7f73285fb000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 8[172411]: segfault at 8 ip 00007f0b227f8875 sp 00007fffd0859c20 error 4 in ld-2.28.so[7f0b227d7000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 18[174958]: segfault 
at 0 ip 000055ad04eb0a88 sp 00007fff16585620 error 6 in 18[55ad04eaf000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 18[177114]: segfault at 8 ip 00007f45a9410875 sp 00007fffb10af420 error 4 in ld-2.28.so[7f45a93ef000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 24 f6 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 | Link to test |
racer test 2: racer rename: onyx-137vm7.onyx.whamcloud.com,onyx-137vm8 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 1156088 Comm: ll_sa_1155933 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.53.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a e3 4b 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffaead48c57e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100004 RDX: 0000000000100005 RSI: ffff88f1089e8370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff88f00c40b000 R11: 0000000000000100 R12: ffff88f1089e8090 R13: ffff88f00c40b098 R14: ffff88f00c40b000 R15: ffff88f00c40b0a8 FS: 0000000000000000(0000) GS:ffff88f13bd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000011fa10006 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net net_failover failover serio_raw virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200000bea:0x1825:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages Lustre: dir [0x2c0000beb:0x1d06:0x0] stripe 0 readdir failed: -2, directory is partially accessed! | Link to test |
racer test 2: racer rename: trevis-131vm7.trevis.whamcloud.com,trevis-91vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 681433 Comm: ll_sa_681337 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 23 59 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffff9a8dc2e33e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000000000b RDX: ffff89e07fd38160 RSI: ffff89dfe2461b70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff89dfc1003180 R09: ffffd26e0191dd80 R10: 0000000000000000 R11: 000000000000000f R12: ffff89dfe2461890 R13: ffff89e00cdc5098 R14: ffff89e00cdc5000 R15: ffff89e00cdc50a8 FS: 0000000000000000(0000) GS:ffff89e07fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000070410005 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon pcspkr joydev i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net virtio_blk net_failover failover serio_raw [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 676075:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be6:0x6d7:0x0]: rc = -5 LustreError: 676075:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 208 previous similar messages LustreError: 676075:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 676075:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 208 previous similar messages LustreError: 143130:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 7 [0x240000be7:0x66b:0x0] inode@0000000000000000: rc = -5 | Link to test |
racer test 2: racer rename: onyx-152vm1.onyx.whamcloud.com,onyx-152vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 690103 Comm: ll_sa_690003 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.53.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 03 79 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb94b44a97e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000c RDX: 000000000010000d RSI: ffff91606eee4570 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff915f4a41aa00 R11: 0000000000000000 R12: ffff91606eee4290 R13: ffff915f4a41aa98 R14: ffff915f4a41aa00 R15: ffff915f4a41aaa8 FS: 0000000000000000(0000) GS:ffff91607bc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000126010002 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl rdma_ucm rdma_cm iw_cm ib_cm lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace intel_rapl_msr fscache intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev mlx5_ib pcspkr ib_uverbs ib_core virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic mlx5_core ata_piix libata mlxfw pci_hyperv_intf tls virtio_net crc32c_intel virtio_blk net_failover serio_raw failover psample [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 
OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 2: racer rename: onyx-80vm1.onyx.whamcloud.com,onyx-80vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 620613 Comm: ll_sa_620472 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.53.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 23 69 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffab60825cfe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000b RDX: 000000000010000c RSI: ffff97e0dc9e1b70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff97e078d72a00 R11: 0000000000000000 R12: ffff97e0dc9e1890 R13: ffff97e078d72a98 R14: ffff97e078d72a00 R15: ffff97e078d72aa8 FS: 0000000000000000(0000) GS:ffff97e0ffd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000092610006 CR4: 00000000001706e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs intel_rapl_msr lockd intel_rapl_common grace fscache crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 142340:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x200000bec:0x1a04:0x0] inode@0000000000000000: rc = -5 LustreError: 142340:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message Lustre: dir [0x280000be6:0x221c:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 12 previous similar messages LustreError: 462106:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffff97e071926800: cannot apply new layout on [0x200000bec:0x1a04:0x0] : rc = -5 LustreError: 462106:0:(lov_object.c:1350:lov_layout_change()) Skipped 28 previous similar messages Lustre: dir [0x200000bea:0x1a14:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x240000beb:0x3504:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Autotest: Test running for 215 minutes (lustre-reviews_review-dne-part-9_114440.34) Lustre: dir [0x240000bea:0x2cc3:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 4 previous similar messages | Link to test |
racer test 1: racer on clients: onyx-96vm1.onyx.whamcloud.com,onyx-96vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 235276 Comm: ll_sa_234986 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.51.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 83 4f 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb7230116be08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000b RDX: 000000000010000c RSI: ffffa030c84a4570 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffffa03089915000 R11: 0000000000000000 R12: ffffa030c84a4290 R13: ffffa03089915098 R14: ffffa03089915000 R15: ffffa030899150a8 FS: 0000000000000000(0000) GS:ffffa030ffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000044e10004 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 ata_generic mbcache jbd2 ata_piix libata crc32c_intel serio_raw virtio_net virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true 
RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA LustreError: 157292:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000bec:0x73:0x0]: rc = -5 LustreError: 157292:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 156300:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000bec:0x73:0x0]: rc = -5 LustreError: 156300:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 5[159228]: segfault at 0 ip 000055729d009b48 sp 00007ffd626363e0 error 6 in 5[55729d004000+7000] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <20> 33 20 6f 72 20 6c 61 74 65 72 20 3c 68 74 74 70 73 3a 2f 2f 67 LustreError: 158989:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03087a33000: 
inode [0x200000bec:0x7:0x0] mdc close failed: rc = -116 Lustre: dir [0x2c0000beb:0xc:0x0] stripe 2 readdir failed: -2, directory is partially accessed! LustreError: 156957:0:(mdc_request.c:1484:mdc_read_page()) lustre-MDT0002-mdc-ffffa03087a33000: dir page locate: [0x280000bd1:0x6f:0x0] at 0: rc -5 Lustre: Skipped 3 previous similar messages LustreError: 156917:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03087a33000: inode [0x200000bec:0x39:0x0] mdc close failed: rc = -2 Lustre: dir [0x280000be7:0x1c8:0x0] stripe 4 readdir failed: -2, directory is partially accessed! Lustre: Skipped 6 previous similar messages LustreError: 157425:0:(mdc_request.c:1484:mdc_read_page()) lustre-MDT0001-mdc-ffffa03087a33000: dir page locate: [0x240000beb:0x68:0x0] at 0: rc -5 LustreError: 157425:0:(mdc_request.c:1484:mdc_read_page()) Skipped 6 previous similar messages LustreError: 157074:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03087a33000: inode [0x2c0000be9:0x13:0x0] mdc close failed: rc = -2 LustreError: 157074:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages LustreError: 164359:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03044a39000: inode [0x280000be7:0x1c8:0x0] mdc close failed: rc = -2 LustreError: 164359:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message Lustre: dir [0x200000be9:0x9d:0x0] stripe 1 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 3 previous similar messages LustreError: 164735:0:(mdc_request.c:1484:mdc_read_page()) lustre-MDT0001-mdc-ffffa03044a39000: dir page locate: [0x240000bd1:0x4f:0x0] at 0: rc -5 LustreError: 164735:0:(mdc_request.c:1484:mdc_read_page()) Skipped 2 previous similar messages LustreError: 155220:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be6:0x1b2:0x0]: rc = -5 LustreError: 155220:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 173795:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be9:0x15f:0x0]: rc = -5 LustreError: 173795:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 2 previous similar messages LustreError: 173795:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 173795:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 2 previous similar messages LustreError: 174369:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03044a39000: inode [0x2c0000bec:0x145:0x0] mdc close failed: rc = -2 LustreError: 174369:0:(file.c:248:ll_close_inode_openhandle()) Skipped 2 previous similar messages LustreError: 176658:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be9:0x32f:0x0]: rc = -5 LustreError: 176658:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 9 previous similar messages LustreError: 176658:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 176658:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 9 previous similar messages 9[181257]: segfault at 8 ip 00007f77b38e8735 sp 00007ffe0cc36210 error 4 in ld-2.28.so[7f77b38c7000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 181114:0:(file.c:248:ll_close_inode_openhandle()) 
lustre-clilmv-ffffa03087a33000: inode [0x280000be8:0x2fc:0x0] mdc close failed: rc = -116 LustreError: 181114:0:(file.c:248:ll_close_inode_openhandle()) Skipped 5 previous similar messages 7[182423]: segfault at 8 ip 00007f95814a6735 sp 00007ffe9b123c30 error 4 in ld-2.28.so[7f9581485000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 182060:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000beb:0x178:0x0]: rc = -5 LustreError: 182060:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 7 previous similar messages LustreError: 182060:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 182060:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 7 previous similar messages 3[182450]: segfault at 8 ip 00007fe43c5d5735 sp 00007ffee0b8deb0 error 4 in ld-2.28.so[7fe43c5b4000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 178554:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x200000bec:0x28f:0x0] get parent: rc = -116 LustreError: 187262:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffffa03044a39000: cannot apply new layout on [0x280000be7:0x59f:0x0] : rc = -5 LustreError: 187262:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x280000be7:0x59f:0x0] error -5. 
LustreError: 191300:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffffa03044a39000: cannot apply new layout on [0x280000be7:0x59f:0x0] : rc = -5 LustreError: 191300:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be7:0x5ea:0x0]: rc = -5 LustreError: 191300:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 59 previous similar messages LustreError: 191300:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 191300:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 59 previous similar messages Lustre: dir [0x200000be9:0x5ff:0x0] stripe 4 readdir failed: -2, directory is partially accessed! LustreError: 186969:0:(mdc_request.c:1484:mdc_read_page()) lustre-MDT0000-mdc-ffffa03044a39000: dir page locate: [0x200000beb:0x3b1:0x0] at 0: rc -5 Lustre: Skipped 53 previous similar messages LustreError: 186969:0:(mdc_request.c:1484:mdc_read_page()) Skipped 43 previous similar messages LustreError: 187749:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03044a39000: inode [0x200000be9:0x2d9:0x0] mdc close failed: rc = -2 LustreError: 187749:0:(file.c:248:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 143278:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 16 [0x0:0x0:0x0] inode@0000000000000000: rc = -5 LustreError: 191150:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffffa03044a39000: cannot apply new layout on [0x240000beb:0x3e9:0x0] : rc = -5 LustreError: 191150:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x240000beb:0x3e9:0x0] error -5. 
1[201006]: segfault at 8 ip 00007f92245c7735 sp 00007fffc0e3c160 error 4 in ld-2.28.so[7f92245a6000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 0[201010]: segfault at 8 ip 00007f6aa9b78735 sp 00007ffdde974f40 error 4 in ld-2.28.so[7f6aa9b57000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 13[200681]: segfault at 8 ip 00007f38b6c0a735 sp 00007fffb941ed50 error 4 in ld-2.28.so[7f38b6be9000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 1[202611]: segfault at 8 ip 00007f2644da9735 sp 00007ffcb6190b50 error 4 in ld-2.28.so[7f2644d88000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 202950:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x200000be9:0x5ff:0x0] get parent: rc = -2 LustreError: 202950:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) Skipped 19 previous similar messages LustreError: 204514:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffffa03044a39000: cannot apply new layout on [0x240000beb:0x3e9:0x0] : rc = -5 14[205933]: segfault at 8 ip 00007f81b62d2735 sp 00007ffef91e6d40 error 4 in ld-2.28.so[7f81b62b1000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: dir [0x2c0000beb:0x4a4:0x0] stripe 1 readdir failed: -2, directory 
is partially accessed! LustreError: 195129:0:(mdc_request.c:1484:mdc_read_page()) lustre-MDT0000-mdc-ffffa03044a39000: dir page locate: [0x200000bd0:0x7b:0x0] at 0: rc -5 LustreError: 207297:0:(lov_object.c:1350:lov_layout_change()) lustre-clilov-ffffa03044a39000: cannot apply new layout on [0x200000beb:0x91b:0x0] : rc = -5 LustreError: 207297:0:(vvp_io.c:1905:vvp_io_init()) lustre: refresh file layout [0x200000beb:0x91b:0x0] error -5. 6[213900]: segfault at 8 ip 00007fcf3eefd735 sp 00007ffee0c91590 error 4 in ld-2.28.so[7fcf3eedc000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 211900:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be8:0x902:0x0]: rc = -5 LustreError: 211900:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 30 previous similar messages LustreError: 211900:0:(llite_lib.c:3769:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 211900:0:(llite_lib.c:3769:ll_prep_inode()) Skipped 30 previous similar messages LustreError: 215967:0:(file.c:248:ll_close_inode_openhandle()) lustre-clilmv-ffffa03044a39000: inode [0x280000be9:0xa06:0x0] mdc close failed: rc = -2 LustreError: 215967:0:(file.c:248:ll_close_inode_openhandle()) Skipped 16 previous similar messages 0[224680]: segfault at 8 ip 00007f71dd9e6735 sp 00007fffea2c8370 error 4 in ld-2.28.so[7f71dd9c5000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 Lustre: dir [0x200000be9:0x9f3:0x0] stripe 3 readdir failed: -2, directory is partially accessed! Lustre: Skipped 4 previous similar messages | Link to test |
racer test 2: racer rename: onyx-83vm8.onyx.whamcloud.com,onyx-83vm9 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 869932 Comm: ll_sa_869824 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.44.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a a3 59 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffae6088fbfe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006 RDX: ffff9c3f3fd38160 RSI: ffff9c3ef8868f70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff9c3e81003180 R09: fffffc21c1c58980 R10: 0000000000000000 R11: 000000000000000f R12: ffff9c3ef8868c90 R13: ffff9c3ee7d27498 R14: ffff9c3ee7d27400 R15: ffff9c3ee7d274a8 FS: 0000000000000000(0000) GS:ffff9c3f3fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000067e10006 CR4: 00000000001706e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c9/0x2220 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon joydev pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel serio_raw net_failover failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200000bea:0x1bc2:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages Autotest: Test running for 270 minutes (lustre-reviews_review-dne-part-9_113836.34) Lustre: dir [0x280000be8:0x3af7:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 9 previous similar messages LustreError: 861073:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x280000be8:0x3d3e:0x0] get parent: rc = -116 LustreError: 861073:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) Skipped 5 previous similar messages | Link to test |
racer test 2: racer rename: onyx-108vm4.onyx.whamcloud.com,onyx-108vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 914585 Comm: ll_sa_914399 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 03 77 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb7a64570fe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000b RDX: 000000000010000c RSI: ffff9129b47de970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff912970899800 R11: 0000000000000000 R12: ffff9129b47de690 R13: ffff912970899898 R14: ffff912970899800 R15: ffff9129708998a8 FS: 0000000000000000(0000) GS:ffff9129ffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000001e410001 CR4: 00000000001706f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_net virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 726924:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff91295804c800: cannot apply new layout on [0x280000be7:0x783:0x0] : rc = -5 LustreError: 734041:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff91295804c800: cannot apply new layout on [0x2c0000be9:0x3c0:0x0] : rc = -5 LustreError: 734041:0:(lov_object.c:1348:lov_layout_change()) Skipped 1 previous similar message Lustre: dir [0x2c0000be7:0x8ac:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: dir [0x200000be9:0x153b:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 4 previous similar messages Lustre: dir [0x2c0000be8:0x18b9:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages Autotest: Test running for 230 minutes (lustre-reviews_review-dne-part-9_113737.32) Lustre: dir [0x280000beb:0x21a2:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message | Link to test |
racer test 2: racer rename: trevis-85vm4.trevis.whamcloud.com,trevis-85vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 795925 Comm: ll_sa_795812 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 03 46 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa446874a7e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000b RDX: 000000000010000c RSI: ffff9402caf13970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9402df545c00 R11: 0000000000000000 R12: ffff9402caf13690 R13: ffff9402df545c98 R14: ffff9402df545c00 R15: ffff9402df545ca8 FS: 0000000000000000(0000) GS:ffff9402ffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000009aa10002 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic virtio_net ata_piix crc32c_intel serio_raw virtio_blk libata net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 535760:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 15 [0x2c0000bea:0x19a:0x0] inode@0000000000000000: rc = -5 Lustre: dir [0x200000bea:0x1c66:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 51 previous similar messages Autotest: Test running for 210 minutes (lustre-reviews_review-dne-part-9_113647.32) Lustre: dir [0x2c0000bea:0x27bf:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 7 previous similar messages | Link to test |
racer test 2: racer rename: trevis-29vm4.trevis.whamcloud.com,trevis-29vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 648454 Comm: ll_sa_648334 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 63 5d 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffad8a0892fe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100007 RDX: 0000000000100008 RSI: ffffa096201b6f70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffffa096081fec00 R11: 0000000000000000 R12: ffffa096201b6c90 R13: ffffa096081fec98 R14: ffffa096081fec00 R15: ffffa096081feca8 FS: 0000000000000000(0000) GS:ffffa0967fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000004fc10004 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon pcspkr joydev i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_blk virtio_net net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200000bea:0xc43:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 152 previous similar messages Lustre: dir [0x200000bee:0x116b:0x0] stripe 2 readdir failed: -2, directory is partially accessed! | Link to test |
racer test 2: racer rename: onyx-142vm4.onyx.whamcloud.com,onyx-142vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 910989 Comm: ll_sa_910834 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.50.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 43 6a 5b c3 cc cc cc cc 48 89 df e8 25 09 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffae43825bbe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000d RDX: 000000000010000e RSI: ffff8d27f0116970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff8d27c24d1e00 R11: 0000000000000000 R12: ffff8d27f0116690 R13: ffff8d27c24d1e98 R14: ffff8d27c24d1e00 R15: ffff8d27c24d1ea8 FS: 0000000000000000(0000) GS:ffff8d27fbc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000013410006 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 intel_rapl_msr intel_rapl_common dns_resolver crct10dif_pclmul nfs lockd grace fscache crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_blk virtio_net net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 864067:0:(lov_object.c:1348:lov_layout_change()) lustre-clilov-ffff8d27c374d800: cannot apply new layout on [0x2c0000be9:0xb2f:0x0] : rc = -5 LustreError: 866330:0:(lcommon_cl.c:188:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000bea:0x3fe:0x0]: rc = -5 LustreError: 866330:0:(lcommon_cl.c:188:cl_file_inode_init()) Skipped 63 previous similar messages LustreError: 866330:0:(llite_lib.c:3700:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 866330:0:(llite_lib.c:3700:ll_prep_inode()) Skipped 63 previous similar messages Lustre: dir [0x240000beb:0x781:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 20 previous similar messages Autotest: Test running for 210 minutes (lustre-reviews_review-dne-part-9_113402.11) | Link to test |
racer test 2: racer rename: onyx-31vm1.onyx.whamcloud.com,onyx-31vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 846984 Comm: ll_sa_846850 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.44.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 03 60 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffc156019dbe08 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff9ddf7c21b370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff9ddf37313400 R11: 0000000000000000 R12: ffff9ddf7c21b090 R13: ffff9ddf37313498 R14: ffff9ddf37313400 R15: ffff9ddf373134a8 FS: 0000000000000000(0000) GS:ffff9ddfbfc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000008b210003 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c9/0x2220 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net serio_raw crc32c_intel net_failover failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x2c0000be9:0x14dd:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 13 previous similar messages Autotest: Test running for 270 minutes (lustre-reviews_review-dne-part-9_112800.11) Lustre: dir [0x280000be7:0x2a5d:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x200000be9:0x3608:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 10 previous similar messages | Link to test |
racer test 2: racer rename: onyx-143vm1.onyx.whamcloud.com,onyx-143vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1169952 Comm: ll_sa_1169815 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.46.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 03 5a 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffbaea841abe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000b RDX: 000000008010000c RSI: ffff8fa0634a3370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff8f9fa3b7f400 R11: 0000000000000000 R12: ffff8fa0634a3090 R13: ffff8f9fa3b7f498 R14: ffff8f9fa3b7f400 R15: ffff8f9fa3b7f4a8 FS: 0000000000000000(0000) GS:ffff8fa07bc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000017010006 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev i2c_piix4 pcspkr virtio_balloon sunrpc ata_generic ext4 mbcache jbd2 ata_piix libata virtio_net crc32c_intel serio_raw net_failover virtio_blk failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Autotest: Test running for 300 minutes (lustre-reviews_review-dne-part-9_112783.11) LustreError: 926186:0:(llite_lib.c:3696:ll_prep_inode()) lustre: new_inode - fatal error: rc = -2 LustreError: 926186:0:(llite_lib.c:3696:ll_prep_inode()) Skipped 2040 previous similar messages Lustre: dir [0x280000be6:0x14e9:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 76 previous similar messages Lustre: dir [0x240000be7:0x21b7:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x200000beb:0x2e3b:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x2c0000bea:0x40ca:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages | Link to test |
racer test 2: racer rename: onyx-147vm1.onyx.whamcloud.com,onyx-147vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 738654 Comm: ll_sa_738521 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.46.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a a3 5b 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffff9ca98215fe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000d RDX: 000000000010000e RSI: ffff91668818b370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9166aae0c600 R11: 0000000000000000 R12: ffff91668818b090 R13: ffff9166aae0c698 R14: ffff9166aae0c600 R15: ffff9166aae0c6a8 FS: 0000000000000000(0000) GS:ffff9166bbd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000ae010003 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr i2c_piix4 virtio_balloon sunrpc ext4 mbcache jbd2 ata_generic crc32c_intel ata_piix serio_raw libata virtio_net net_failover failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 1: racer on clients: onyx-32vm1.onyx.whamcloud.com,onyx-32vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 683020 Comm: ll_sa_464692 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.46.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a e3 52 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa61d0233fe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100007 RDX: 0000000000100008 RSI: ffff9755a8a47570 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9755cdb95000 R11: 0000000000000000 R12: ffff9755a8a47290 R13: ffff9755cdb95098 R14: ffff9755cdb95000 R15: ffff9755cdb950a8 FS: 0000000000000000(0000) GS:ffff97563fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000062e10006 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x56c/0x1f60 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_interpret+0x440/0x440 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel i2c_piix4 pcspkr joydev virtio_balloon sunrpc ext4 mbcache jbd2 ata_generic crc32c_intel ata_piix libata virtio_net serio_raw virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true 
RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA LustreError: 153422:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755cb6e6800: inode [0x200000beb:0xf:0x0] mdc close failed: rc = -116 LustreError: 154494:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755cb6e6800: inode [0x200000be9:0x37:0x0] mdc close failed: rc = -116 LustreError: 156503:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755cb6e6800: inode [0x2c0000be6:0x45:0x0] mdc close failed: rc = -2 LustreError: 156852:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755baefb800: inode [0x200000be9:0x2e:0x0] mdc close failed: rc = -2 Lustre: dir [0x200000bec:0xdd:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
LustreError: 159407:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff9755baefb800: cannot apply new layout on [0x240000be6:0x121:0x0] : rc = -5 LustreError: 159407:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x240000be6:0x121:0x0] error -5. Lustre: dir [0x280000be8:0x79:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message 2[162370]: segfault at 8 ip 00007f28cc3fd735 sp 00007ffe6fa1b0a0 error 4 in ld-2.28.so[7f28cc3dc000+2f000] Code: 81 39 52 e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 158816:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755cb6e6800: inode [0x280000be9:0x9b:0x0] mdc close failed: rc = -116 LustreError: 158816:0:(file.c:247:ll_close_inode_openhandle()) Skipped 3 previous similar messages LustreError: 163874:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff9755baefb800: cannot apply new layout on [0x240000be6:0x121:0x0] : rc = -5 LustreError: 163874:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000be6:0x121:0x0]: rc = -5 LustreError: 163874:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 Autotest: Test running for 210 minutes (lustre-reviews_review-dne-part-9_112503.11) LustreError: 163874:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000be9:0xba:0x0]: rc = -5 LustreError: 163874:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 1 previous similar message LustreError: 163874:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 163874:0:(llite_lib.c:3698:ll_prep_inode()) Skipped 1 previous similar message LustreError: 163706:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755cb6e6800: inode [0x200000be9:0x7b:0x0] mdc close failed: 
rc = -116 LustreError: 163706:0:(file.c:247:ll_close_inode_openhandle()) Skipped 3 previous similar messages 10[441045]: segfault at 0 ip 000055ba9ba49b47 sp 00007ffcc8262c20 error 6 in 10[55ba9ba45000+7000] Code: Unable to access opcode bytes at RIP 0x55ba9ba49b1d. LustreError: 442890:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be8:0x2b:0x0]: rc = -5 LustreError: 442890:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 6 previous similar messages LustreError: 442890:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 442890:0:(llite_lib.c:3698:ll_prep_inode()) Skipped 6 previous similar messages LustreError: 445409:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000be6:0x1f6:0x0]: rc = -5 LustreError: 445409:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 Lustre: dir [0x200000bed:0xb2:0x0] stripe 2 readdir failed: -2, directory is partially accessed! LustreError: 446566:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be8:0x2b:0x0]: rc = -5 LustreError: 446566:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 2 previous similar messages LustreError: 446566:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 446566:0:(llite_lib.c:3698:ll_prep_inode()) Skipped 2 previous similar messages LustreError: 449096:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9755baefb800: inode [0x280000be8:0x1ca:0x0] mdc close failed: rc = -116 LustreError: 449096:0:(file.c:247:ll_close_inode_openhandle()) Skipped 8 previous similar messages Lustre: dir [0x2c0000be8:0x2b9:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
LustreError: 452633:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000be9:0x31e:0x0]: rc = -5 LustreError: 452633:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 2 previous similar messages LustreError: 452633:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 452633:0:(llite_lib.c:3698:ll_prep_inode()) Skipped 2 previous similar messages Lustre: dir [0x200000bea:0x40e:0x0] stripe 2 readdir failed: -2, directory is partially accessed! LustreError: 452189:0:(mdc_request.c:1479:mdc_read_page()) lustre-MDT0003-mdc-ffff9755baefb800: dir page locate: [0x2c0000bd1:0x71:0x0] at 0: rc -5 LustreError: 457866:0:(mdc_request.c:1479:mdc_read_page()) lustre-MDT0001-mdc-ffff9755cb6e6800: dir page locate: [0x240000bd1:0x73:0x0] at 0: rc -5 LustreError: 457866:0:(mdc_request.c:1479:mdc_read_page()) Skipped 1 previous similar message LustreError: 466458:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be6:0x387:0x0]: rc = -5 LustreError: 466458:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 4 previous similar messages LustreError: 466458:0:(llite_lib.c:3698:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 466458:0:(llite_lib.c:3698:ll_prep_inode()) Skipped 4 previous similar messages LustreError: lustre-OST0001-osc-ffff9755cb6e6800: operation ost_sync to node 10.240.22.242@tcp failed: rc = -107 Lustre: lustre-OST0001-osc-ffff9755cb6e6800: Connection to lustre-OST0001 (at 10.240.22.242@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-OST0001-osc-ffff9755cb6e6800: This client was evicted by lustre-OST0001; in progress operations using this service will fail. 
LustreError: 681484:0:(ldlm_resource.c:982:ldlm_resource_complain()) lustre-OST0001-osc-ffff9755cb6e6800: namespace resource [0x340000407:0x1c7d:0x0].0x0 (ffff9755ee3e3540) refcount nonzero (2) after lock cleanup; forcing cleanup. LustreError: lustre-MDT0003-mdc-ffff9755cb6e6800: operation mds_close to node 10.240.22.246@tcp failed: rc = -107 Lustre: lustre-MDT0003-mdc-ffff9755cb6e6800: Connection to lustre-MDT0003 (at 10.240.22.246@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0003-mdc-ffff9755cb6e6800: This client was evicted by lustre-MDT0003; in progress operations using this service will fail. | Link to test |
racer test 2: racer rename: onyx-146vm1.onyx.whamcloud.com,onyx-146vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1007119 Comm: ll_sa_1006972 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.44.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 63 60 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffbc04c60dbe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff9a4c6001d770 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff9a4cc611a200 R11: 0000000000000000 R12: ffff9a4c6001d490 R13: ffff9a4cc611a298 R14: ffff9a4cc611a200 R15: ffff9a4cc611a2a8 FS: 0000000000000000(0000) GS:ffff9a4cfbc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000005b210002 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c9/0x2220 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver intel_rapl_msr nfs intel_rapl_common lockd grace fscache crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix crc32c_intel libata virtio_net serio_raw virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 743827:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000beb:0x57e:0x0]: rc = -5 LustreError: 743827:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 28 previous similar messages LustreError: 743827:0:(llite_lib.c:3695:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 743827:0:(llite_lib.c:3695:ll_prep_inode()) Skipped 28 previous similar messages Lustre: dir [0x200000beb:0x1628:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages Autotest: Test running for 255 minutes (lustre-reviews_review-dne-part-9_112356.11) Lustre: dir [0x280000bec:0x3062:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 7 previous similar messages | Link to test |
racer test 2: racer rename: onyx-150vm1.onyx.whamcloud.com,onyx-150vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 613268 Comm: ll_sa_613125 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.44.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 03 76 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa965c802be08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff9504307bf570 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff950363312a00 R11: 0000000000000000 R12: ffff9504307bf290 R13: ffff950363312a98 R14: ffff950363312a00 R15: ffff950363312aa8 FS: 0000000000000000(0000) GS:ffff95043bc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000bc010003 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c9/0x2220 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs intel_rapl_msr lockd intel_rapl_common grace fscache crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net net_failover serio_raw failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 2: racer rename: trevis-23vm4.trevis.whamcloud.com,trevis-23vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 816608 Comm: ll_sa_816492 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.44.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 83 4b 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb7228471fe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006 RDX: ffff8cf9ffc38160 RSI: ffff8cf988aaef70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff8cf941003180 R09: fffffaf8c0e92c80 R10: 0000000000000000 R11: 000000000000000f R12: ffff8cf988aaec90 R13: ffff8cf952e67098 R14: ffff8cf952e67000 R15: ffff8cf952e670a8 FS: 0000000000000000(0000) GS:ffff8cf9ffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000001a410002 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c9/0x2220 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon i2c_piix4 joydev pcspkr sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 92179:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be4:0x963:0x0]: rc = -5 LustreError: 92179:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 17 previous similar messages LustreError: 92179:0:(llite_lib.c:3695:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 92179:0:(llite_lib.c:3695:ll_prep_inode()) Skipped 17 previous similar messages LustreError: 92179:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 16 [0x2c0000be4:0x963:0x0] inode@0000000000000000: rc = -5 LustreError: 576924:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8cf98fabf000: cannot apply new layout on [0x240000be8:0x304:0x0] : rc = -5 Autotest: Test running for 395 minutes (lustre-reviews_review-dne-part-9_112015.11) LustreError: 584326:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff8cf98fabf000: cannot apply new layout on [0x240000be8:0x304:0x0] : rc = -5 LustreError: 585127:0:(llite_nfs.c:430:ll_dir_get_parent_fid()) lustre: failure inode [0x2c0000be4:0xb66:0x0] get parent: rc = -116 Lustre: dir [0x200000be9:0x23a2:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 74 previous similar messages Lustre: dir [0x280000be8:0x16e8:0x0] stripe 2 readdir failed: -2, directory is partially accessed! 
Lustre: Skipped 1 previous similar message Autotest: Test running for 400 minutes (lustre-reviews_review-dne-part-9_112015.11) | Link to test |
racer test 2: racer rename: onyx-55vm1.onyx.whamcloud.com,onyx-55vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 534493 Comm: ll_sa_534362 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.44.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 d3 6a 43 47 5b c3 cc cc cc cc 48 89 df e8 25 19 af ff 39 05 23 8f 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb62806e0fe08 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff921e1e398970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff921dd8815800 R11: 0000000000000000 R12: ffff921e1e398690 R13: ffff921dd8815898 R14: ffff921dd8815800 R15: ffff921dd88158a8 FS: 0000000000000000(0000) GS:ffff921e3fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000062410005 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c9/0x2220 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_net virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 419940:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 14 [0x280000bea:0x881:0x0] inode@0000000000000000: rc = -5 LustreError: 419940:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 6 previous similar messages LustreError: 435283:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000be9:0xac6:0x0]: rc = -5 LustreError: 435283:0:(lcommon_cl.c:179:cl_file_inode_init()) Skipped 38 previous similar messages LustreError: 435283:0:(llite_lib.c:3692:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 435283:0:(llite_lib.c:3692:ll_prep_inode()) Skipped 38 previous similar messages Lustre: dir [0x280000bea:0x15cd:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 4 previous similar messages LustreError: 380208:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 380208:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 11 previous similar messages | Link to test |
racer test 2: racer rename: onyx-45vm1.onyx.whamcloud.com,onyx-45vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 612513 Comm: ll_sa_612372 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.40.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a 23 7b 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffae9384ed7e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000b RDX: 000000008010000c RSI: ffff98851ddd3970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff988523b65000 R11: 0000000000000000 R12: ffff98851ddd3690 R13: ffff988523b65098 R14: ffff988523b65000 R15: ffff988523b650a8 FS: 0000000000000000(0000) GS:ffff98853fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000057610002 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic crc32c_intel ata_piix libata serio_raw virtio_net virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: lustre-OST0001-osc-ffff9884bce20000: operation ldlm_enqueue to node 10.240.23.246@tcp failed: rc = -107 Lustre: lustre-OST0001-osc-ffff9884bce20000: Connection to lustre-OST0001 (at 10.240.23.246@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-OST0001-osc-ffff9884bce20000: This client was evicted by lustre-OST0001; in progress operations using this service will fail. Lustre: lustre-OST0001-osc-ffff9884bce20000: Connection restored to (at 10.240.23.246@tcp) LustreError: 397301:0:(lcommon_cl.c:179:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be6:0x13e7:0x0]: rc = -5 LustreError: 397301:0:(llite_lib.c:3715:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] 
inode@0000000000000000: rc = -1 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 Autotest: Test running for 300 minutes (lustre-reviews_review-dne-part-9_111811.11) Lustre: dir [0x2c0000be6:0x21b0:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 23 previous similar messages LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 2 [0x0:0x0:0x0] inode@0000000000000000: rc = -1 LustreError: 66532:0:(statahead.c:825:ll_statahead_interpret_work()) Skipped 1 previous similar message | Link to test |
racer test 2: racer rename: trevis-24vm4.trevis.whamcloud.com,trevis-24vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 780152 Comm: ll_sa_780001 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.40.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a 83 6e 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb5b644d33e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000000000b RDX: ffff954a3fc38160 RSI: ffff954a27915170 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff954981003180 R09: fffffb19029c7e80 R10: 0000000000000000 R11: 000000000000000f R12: ffff954a27914e90 R13: ffff9549bc9d1c98 R14: ffff9549bc9d1c00 R15: ffff9549bc9d1ca8 FS: 0000000000000000(0000) GS:ffff954a3fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000085410004 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon i2c_piix4 joydev pcspkr sunrpc ext4 ata_generic mbcache jbd2 ata_piix libata crc32c_intel virtio_net serio_raw virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x240000be6:0x6d4:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 6 previous similar messages LustreError: 549248:0:(statahead.c:825:ll_statahead_interpret_work()) lustre: failed to prep 5 [0x2c0000be9:0x59a:0x0] inode@0000000000000000: rc = -5 Lustre: dir [0x2c0000be9:0x1b9c:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x200000be8:0x2289:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages | Link to test |
racer test 2: racer rename: onyx-52vm2.onyx.whamcloud.com,onyx-52vm3 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 630962 Comm: ll_sa_630793 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.40.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a 83 63 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffbb6002d37e10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000005 RDX: ffff99cfffc38160 RSI: ffff99cfc2663f70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff99cf41003180 R09: fffff86cc11dc900 R10: 0000000000000000 R11: 000000000000000f R12: ffff99cfc2663c90 R13: ffff99cf54939898 R14: ffff99cf54939800 R15: ffff99cf549398a8 FS: 0000000000000000(0000) GS:ffff99cfffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000002ba10001 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw net_failover failover virtio_blk CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x20000041c:0x52b:0x0] stripe 1 readdir failed: -2, directory is partially accessed! | Link to test |
racer test 2: racer rename: onyx-37vm1.onyx.whamcloud.com,onyx-37vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 154506 Comm: ll_sa_154436 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.40.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a a3 55 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffab19429bbe10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000008 RDX: ffff92e3bfd38160 RSI: ffff92e32e940970 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff92e301003180 R09: ffffcd234042a500 R10: 0000000000000000 R11: 000000000000000f R12: ffff92e32e940690 R13: ffff92e3252c1a98 R14: ffff92e3252c1a00 R15: ffff92e3252c1aa8 FS: 0000000000000000(0000) GS:ffff92e3bfd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000004ea10006 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_blk virtio_net net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x240000403:0x17d8:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Autotest: Test running for 5 minutes (lustre-reviews_custom_111373.1005) | Link to test |
racer test 2: racer rename: onyx-108vm1.onyx.whamcloud.com,onyx-108vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 36044 Comm: ll_sa_35977 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.40.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a 23 7a 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa29b452bbe10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000006 RDX: ffff89137fd38160 RSI: ffff8912e0f3ef70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff8912c1003180 R09: fffffa0000af7800 R10: 0000000000000000 R11: 000000000000000f R12: ffff8912e0f3ec90 R13: ffff8912e3ebe298 R14: ffff8912e3ebe200 R15: ffff8912e3ebe2a8 FS: 0000000000000000(0000) GS:ffff89137fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000039610003 CR4: 00000000001706e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr virtio_balloon joydev i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic virtio_net crc32c_intel ata_piix serio_raw virtio_blk libata net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=2 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 1: racer on clients: onyx-138vm4.onyx.whamcloud.com,onyx-138vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 541813 Comm: ll_sa_541572 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.37.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a e3 6a 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa997c6e97e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000001 RDX: ffff92273bc38160 RSI: ffff922692e25170 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff922700003180 R09: 0000000000000000 R10: 0000000000000000 R11: 000000000000000f R12: ffff922692e24e90 R13: ffff92260d9f4298 R14: ffff92260d9f4200 R15: ffff92260d9f42a8 FS: 0000000000000000(0000) GS:ffff92273bc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000027810002 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 intel_rapl_msr dns_resolver intel_rapl_common crct10dif_pclmul nfs lockd grace fscache crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_net virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true 
RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_ENABLE_REMOTE_DIRS=true RACER_ENABLE_STRIPED_DIRS=true RACER_ENABLE_MIGRATION=true RACER_ENABLE_FILE_MIGRATE=true RACER_ENABLE_PFL=true RACER_ENABLE_DOM=true RACER_ENABLE_FLR=true RACER_MA LustreError: 81474:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9226a23e2800: inode [0x200000bec:0x3:0x0] mdc close failed: rc = -116 LustreError: 81994:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9226a23e2800: inode [0x200000bec:0x3:0x0] mdc close failed: rc = -116 LustreError: 83476:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9226a23e2800: inode [0x200000bec:0x6e:0x0] mdc close failed: rc = -116 LustreError: 83476:0:(file.c:247:ll_close_inode_openhandle()) Skipped 1 previous similar message LustreError: 86211:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff922719431800: inode [0x280000be9:0x4:0x0] mdc close failed: rc = -116 LustreError: 86211:0:(file.c:247:ll_close_inode_openhandle()) Skipped 4 previous similar messages 2[93752]: segfault at 8 ip 00007f68d0240735 sp 00007fffb26bcb40 error 4 in ld-2.28.so[7f68d021f000+2f000] Code: 81 39 52 
e5 74 64 0f 84 99 09 00 00 48 85 c0 75 e4 48 83 3d 64 f7 20 00 00 0f 85 aa 09 00 00 49 8b 47 68 49 8b 97 68 02 00 00 <48> 8b 40 08 48 85 d2 74 23 48 8b 72 08 48 01 c6 80 3e 00 74 17 48 LustreError: 97079:0:(lcommon_cl.c:177:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be8:0x65:0x0]: rc = -5 LustreError: 97079:0:(llite_lib.c:3712:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: lustre-MDT0002-mdc-ffff9226a23e2800: operation ldlm_enqueue to node 10.240.25.225@tcp failed: rc = -107 Lustre: lustre-MDT0002-mdc-ffff9226a23e2800: Connection to lustre-MDT0002 (at 10.240.25.225@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0002-mdc-ffff9226a23e2800: This client was evicted by lustre-MDT0002; in progress operations using this service will fail. LustreError: 91157:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9226a23e2800: inode [0x280000be7:0xed:0x0] mdc close failed: rc = -108 LustreError: 91157:0:(file.c:247:ll_close_inode_openhandle()) Skipped 2 previous similar messages LustreError: 94103:0:(file.c:5986:ll_inode_revalidate_fini()) lustre: revalidate FID [0x280000be8:0x7a:0x0] error: rc = -5 Lustre: lustre-MDT0002-mdc-ffff9226a23e2800: Connection restored to 10.240.25.225@tcp (at 10.240.25.225@tcp) LustreError: lustre-MDT0000-mdc-ffff9226a23e2800: operation ldlm_enqueue to node 10.240.25.225@tcp failed: rc = -107 LustreError: Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9226a23e2800: Connection to lustre-MDT0000 (at 10.240.25.225@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff9226a23e2800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. 
LustreError: 97814:0:(file.c:5986:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200000bed:0x130:0x0] error: rc = -5 LustreError: 93120:0:(llite_lib.c:1997:ll_md_setattr()) md_setattr fails: rc = -108 LustreError: 97814:0:(file.c:5986:ll_inode_revalidate_fini()) Skipped 5 previous similar messages LustreError: 102566:0:(mdc_request.c:1457:mdc_read_page()) lustre-MDT0000-mdc-ffff9226a23e2800: [0x200000bea:0x2:0x0] lock enqueue fails: rc = -108 LustreError: 102566:0:(statahead.c:1801:is_first_dirent()) lustre: reading dir [0x200000bea:0x2:0x0] at 0 stat_pid = 96113 : rc = -108 LustreError: 99854:0:(vvp_io.c:1903:vvp_io_init()) lustre: refresh file layout [0x200000bea:0xac:0x0] error -108. Lustre: dir [0x2c0000be8:0xb0:0x0] stripe 1 readdir failed: -108, directory is partially accessed! LustreError: 96719:0:(llite_nfs.c:426:ll_dir_get_parent_fid()) lustre: failure inode [0x200000bec:0x138:0x0] get parent: rc = -108 LustreError: 96719:0:(llite_nfs.c:426:ll_dir_get_parent_fid()) Skipped 2 previous similar messages Lustre: lustre-MDT0000-mdc-ffff9226a23e2800: Connection restored to 10.240.25.225@tcp (at 10.240.25.225@tcp) LustreError: 464861:0:(lcommon_cl.c:177:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000bea:0x118:0x0]: rc = -5 LustreError: 464861:0:(llite_lib.c:3712:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 94896:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff922719431800: inode [0x200000bea:0xb0:0x0] mdc close failed: rc = -2 LustreError: 94896:0:(file.c:247:ll_close_inode_openhandle()) Skipped 29 previous similar messages LustreError: 488316:0:(lcommon_cl.c:177:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000bea:0x118:0x0]: rc = -5 LustreError: 488316:0:(llite_lib.c:3712:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 498461:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9226a23e2800: inode [0x200000beb:0x2a0:0x0] mdc close failed: rc 
= -116 LustreError: 498461:0:(file.c:247:ll_close_inode_openhandle()) Skipped 6 previous similar messages Autotest: Test running for 310 minutes (lustre-reviews_review-dne-part-9_111171.11) LustreError: 527877:0:(file.c:247:ll_close_inode_openhandle()) lustre-clilmv-ffff9226a23e2800: inode [0x280000be6:0x3b0:0x0] mdc close failed: rc = -116 LustreError: 527877:0:(file.c:247:ll_close_inode_openhandle()) Skipped 12 previous similar messages | Link to test |
racer test 2: racer rename: trevis-36vm1.trevis.whamcloud.com,trevis-36vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 296056 Comm: ll_sa_295925 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.37.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 e3 6a 23 68 5b c3 cc cc cc cc 48 89 df e8 e5 18 af ff 39 05 f3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb32b0792fe10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000004 RDX: ffff9469bfd38160 RSI: ffff9469456d0f70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff946901003180 R09: ffffdf5b00110d00 R10: 0000000000000000 R11: 000000000000000f R12: ffff9469456d0c90 R13: ffff94698cc89698 R14: ffff94698cc89600 R15: ffff94698cc896a8 FS: 0000000000000000(0000) GS:ffff9469bfd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000089610001 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon joydev pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw net_failover virtio_blk failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200000be9:0x13c0:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message | Link to test |
racer test 2: racer rename: onyx-80vm1.onyx.whamcloud.com,onyx-80vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 1016854 Comm: ll_sa_1016763 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a 83 56 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb98d452d3e10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000008 RDX: ffff99b93fd38160 RSI: ffff99b91d62fb70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff99b881003180 R09: fffff0eec1209d00 R10: 0000000000000000 R11: 000000000000000f R12: ffff99b91d62f890 R13: ffff99b916a9d698 R14: ffff99b916a9d600 R15: ffff99b916a9d6a8 FS: 0000000000000000(0000) GS:ffff99b93fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000004c210003 CR4: 00000000001706e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_blk serio_raw virtio_net net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200000bee:0x168c:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages | Link to test |
racer test 2: racer rename: onyx-58vm1.onyx.whamcloud.com,onyx-58vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 567944 Comm: ll_sa_567793 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a e3 48 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb9d882bdbe10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000c RDX: 000000000010000d RSI: ffff9cef7c85b370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9cef88c58a00 R11: 0000000000000000 R12: ffff9cef7c85b090 R13: ffff9cef88c58a98 R14: ffff9cef88c58a00 R15: ffff9cef88c58aa8 FS: 0000000000000000(0000) GS:ffff9cefffd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000042a10002 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel serio_raw net_failover virtio_blk failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 349513:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff9cef92319800: cannot apply new layout on [0x2c0000be5:0x10e6:0x0] : rc = -5 Autotest: Test running for 270 minutes (lustre-reviews_review-dne-part-9_110121.11) Lustre: dir [0x280000be6:0x34dd:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 14 previous similar messages | Link to test |
racer test 2: racer rename: onyx-144vm1.onyx.whamcloud.com,onyx-144vm2 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 3235121 Comm: ll_sa_3234929 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a c3 49 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb8b185517e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000008 RDX: ffff9350fbd38160 RSI: ffff934fc938f570 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff9350c0003180 R09: fffff74ec0d6e480 R10: 0000000000000000 R11: 000000000000000f R12: ffff934fc938f290 R13: ffff934fd8444498 R14: ffff934fd8444400 R15: ffff934fd84444a8 FS: 0000000000000000(0000) GS:ffff9350fbd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000007ca10005 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: loop ib_core nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs intel_rapl_msr lockd grace fscache intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix crc32c_intel libata serio_raw virtio_blk virtio_net net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false 
RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 2: racer rename: onyx-135vm1.onyx.whamcloud.com,onyx-135vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 795465 Comm: ll_sa_795361 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a 83 59 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffab4903223e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000a RDX: 000000008010000b RSI: ffff9a32292fe370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff9a3262482000 R11: 0000000000000000 R12: ffff9a32292fe090 R13: ffff9a3262482098 R14: ffff9a3262482000 R15: ffff9a32624820a8 FS: 0000000000000000(0000) GS:ffff9a32fbc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000133410004 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss intel_rapl_msr nfsv4 intel_rapl_common crct10dif_pclmul dns_resolver nfs crc32_pclmul lockd grace fscache ghash_clmulni_intel joydev pcspkr i2c_piix4 virtio_balloon sunrpc ata_generic ext4 mbcache jbd2 ata_piix libata virtio_net crc32c_intel net_failover failover virtio_blk serio_raw CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Autotest: Test running for 240 minutes (lustre-reviews_review-dne-part-9_109854.11) Lustre: dir [0x200000bee:0x362d:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 64 previous similar messages | Link to test |
racer test 2: racer rename: onyx-124vm1.onyx.whamcloud.com,onyx-124vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1018477 Comm: ll_sa_1018349 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a e3 59 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffbd700495be10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100009 RDX: 000000000010000a RSI: ffff9d88f8f72d70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9d88ab8cb200 R11: 0000000000000000 R12: ffff9d88f8f72a90 R13: ffff9d88ab8cb298 R14: ffff9d88ab8cb200 R15: ffff9d88ab8cb2a8 FS: 0000000000000000(0000) GS:ffff9d893fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000005e610004 CR4: 00000000001706f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x271/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver intel_rapl_msr nfs lockd intel_rapl_common grace fscache crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel serio_raw net_failover virtio_blk failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 906512:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be4:0x8fb:0x0]: rc = -5 LustreError: 906512:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 51 previous similar messages LustreError: 906512:0:(llite_lib.c:3736:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 906512:0:(llite_lib.c:3736:ll_prep_inode()) Skipped 51 previous similar messages LustreError: 169354:0:(statahead.c:836:ll_statahead_interpret_work()) lustre: failed to prep 15 [0x280000be7:0x6dd:0x0] inode@0000000000000000: rc = -5 Autotest: Test running for 190 minutes (lustre-reviews_review-dne-part-9_109832.11) Lustre: dir [0x240000bea:0x1838:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 6 previous similar messages | Link to test |
racer test 2: racer rename: onyx-150vm5.onyx.whamcloud.com,onyx-150vm6 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 718877 Comm: ll_sa_718725 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a 43 7b 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffae4545f8fe10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000008 RDX: ffff9344bbc38160 RSI: ffff9344b568a770 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff934480003180 R09: ffffde9a44b25280 R10: 0000000000000000 R11: 000000000000000f R12: ffff9344b568a490 R13: ffff93439a8d7e98 R14: ffff93439a8d7e00 R15: ffff93439a8d7ea8 FS: 0000000000000000(0000) GS:ffff9344bbc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000044810002 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd intel_rapl_msr grace fscache intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr i2c_piix4 virtio_balloon sunrpc ata_generic ext4 mbcache jbd2 ata_piix libata crc32c_intel virtio_net net_failover serio_raw failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 675891:0:(mdc_request.c:1469:mdc_read_page()) lustre-MDT0002-mdc-ffff93449fdb8800: dir page locate: [0x280000bd5:0x8b:0x0] at 0: rc -5 LustreError: 675891:0:(mdc_request.c:1469:mdc_read_page()) Skipped 5 previous similar messages LustreError: 144257:0:(statahead.c:836:ll_statahead_interpret_work()) lustre: failed to prep 19 [0x240000be9:0x102b:0x0] inode@0000000000000000: rc = -5 Lustre: dir [0x280000be6:0x12fd:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 24 previous similar messages | Link to test |
racer test 2: racer rename: onyx-147vm3.onyx.whamcloud.com,onyx-147vm4 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 929220 Comm: ll_sa_929091 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a a3 5c 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb59b4716be10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000003 RDX: ffffa092fbc38160 RSI: ffffa0927f550f70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffffa092c0003180 R09: ffffeafdc4e74000 R10: 0000000000000000 R11: 000000000000000f R12: ffffa0927f550c90 R13: ffffa092c454e298 R14: ffffa092c454e200 R15: ffffa092c454e2a8 FS: 0000000000000000(0000) GS:ffffa092fbc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000038c10004 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 intel_rapl_msr intel_rapl_common dns_resolver crct10dif_pclmul nfs crc32_pclmul lockd grace fscache ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic virtio_net crc32c_intel ata_piix libata net_failover serio_raw virtio_blk failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 2: racer rename: onyx-76vm1.onyx.whamcloud.com,onyx-76vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 814297 Comm: ll_sa_814150 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a 63 74 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb41e41ae7e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000003 RDX: ffff997f3fc38160 RSI: ffff997f26c10370 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff997e81003180 R09: ffffd7ea400fda00 R10: 0000000000000000 R11: 000000000000000f R12: ffff997f26c10090 R13: ffff997eb1b5a698 R14: ffff997eb1b5a600 R15: ffff997eb1b5a6a8 FS: 0000000000000000(0000) GS:ffff997f3fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000088810005 CR4: 00000000001706f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? apic_timer_interrupt+0xa/0x20 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul dns_resolver nfs lockd grace fscache ghash_clmulni_intel joydev pcspkr i2c_piix4 virtio_balloon sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c | Link to test |
racer test 2: racer rename: onyx-59vm4.onyx.whamcloud.com,onyx-59vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 477969 Comm: ll_sa_477873 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.27.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 6a 03 48 5b c3 cc cc cc cc 48 89 df e8 95 15 af ff 39 05 c3 8e 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa71685487e10 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000003 RDX: ffff91c8bfc38160 RSI: ffff91c86e5d0370 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff91c801003180 R09: fffff5ee40f4c280 R10: 0000000000000000 R11: 000000000000000f R12: ffff91c86e5d0090 R13: ffff91c8512dc098 R14: ffff91c8512dc000 R15: ffff91c8512dc0a8 FS: 0000000000000000(0000) GS:ffff91c8bfc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000009ac10005 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x6c4/0x2210 [lustre] ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw net_failover virtio_blk failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x280000417:0x8d1:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x280000416:0xf21:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages Lustre: dir [0x2c0000416:0xdb9:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: dir [0x200000419:0x11b3:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x280000418:0x137a:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 2 previous similar messages Lustre: dir [0x2c0000414:0x107f:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages | Link to test |
racer test 2: racer rename: onyx-77vm1.onyx.whamcloud.com,onyx-77vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 728580 Comm: ll_sa_728442 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 83 6e 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb1d986dcfe08 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000009 RDX: ffff9f997cc38160 RSI: ffff9f98fbfc2170 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff9f98c1003180 R09: ffffee23c25a0780 R10: 0000000000000000 R11: 000000000000000f R12: ffff9f9959358200 R13: ffff9f9959358298 R14: ffff9f98fbfc1e90 R15: ffff9f99593582a8 FS: 0000000000000000(0000) GS:ffff9f997cc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000b9c10001 CR4: 00000000001706f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul nfsv4 dns_resolver nfs lockd grace fscache ghash_clmulni_intel i2c_piix4 joydev pcspkr virtio_balloon sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw net_failover virtio_blk failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 583389:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000bee:0x15dd:0x0]: rc = -5 LustreError: 583389:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 68 previous similar messages LustreError: 583389:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 583389:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 68 previous similar messages Lustre: dir [0x240000beb:0x167d:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Autotest: Test running for 220 minutes (lustre-reviews_review-dne-part-9_109407.11) | Link to test |
racer test 2: racer rename: onyx-122vm7,onyx-80vm10.onyx.whamcloud.com DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 931931 Comm: ll_sa_931799 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 c3 59 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffff98164190fe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000004 RDX: ffff8c0939738160 RSI: ffff8c08d100ad70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff8c0881003180 R09: ffffc7e940d8c380 R10: 0000000000000000 R11: 000000000000000f R12: ffff8c08e7e16e00 R13: ffff8c08e7e16e98 R14: ffff8c08d100aa90 R15: ffff8c08e7e16ea8 FS: 0000000000000000(0000) GS:ffff8c0939700000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000bb210003 CR4: 00000000001706e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel net_failover virtio_blk serio_raw failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 861764:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000bea:0xa79:0x0]: rc = -5 LustreError: 861764:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 24 previous similar messages LustreError: 861764:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 861764:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 24 previous similar messages Lustre: dir [0x2c0000be7:0x8c6:0x0] stripe 0 readdir failed: -2, directory is partially accessed! LustreError: 146165:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for 9 [0x240000be7:0x896:0x0]: rc = -5 Lustre: dir [0x2c0000bea:0xf31:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 4 previous similar messages | Link to test |
racer test 2: racer rename: trevis-106vm7.trevis.whamcloud.com,trevis-106vm8 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1296891 Comm: ll_sa_1296729 Kdump: loaded Tainted: G W OE --------- - - 4.18.0-477.27.1.el8_8.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 b4 e5 75 5b e9 1d a2 42 00 48 89 df e8 b5 98 b0 ff 39 05 83 dc 5d 01 77 e3 5b e9 07 a2 42 00 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d e9 e5 RSP: 0018:ffffa3f488a77e10 EFLAGS: 00010216 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100005 RDX: 0000000000100006 RSI: ffff94252d790370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000000 R09: ffffffffc1045c00 R10: ffff9425020f3000 R11: 0000000000000001 R12: ffff94252d790090 R13: ffff9425020f3098 R14: ffff9425020f3000 R15: ffff9425020f30a8 FS: 0000000000000000(0000) GS:ffff94257fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000055810003 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ll_statahead_thread+0x6c4/0x2210 [lustre] ? __raw_spin_unlock_irq+0x5/0x20 ? finish_task_switch+0x86/0x2e0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) ib_core tcp_diag inet_diag loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver intel_rapl_msr intel_rapl_common nfs crct10dif_pclmul lockd grace fscache crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net net_failover virtio_blk serio_raw failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false 
RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200006997:0x1296:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 119 previous similar messages | Link to test |
racer test 2: racer rename: onyx-141vm4.onyx.whamcloud.com,onyx-141vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 1051378 Comm: ll_sa_1051249 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 63 70 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffbccf0895fe08 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000007 RDX: ffff97fd7bd38160 RSI: ffff97fd76321b70 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff97fd40003180 R09: ffffe91f84b4aa00 R10: 0000000000000000 R11: 000000000000000f R12: ffff97fd7a5fd800 R13: ffff97fd7a5fd898 R14: ffff97fd76321890 R15: ffff97fd7a5fd8a8 FS: 0000000000000000(0000) GS:ffff97fd7bd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000b8e10006 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net net_failover virtio_blk failover serio_raw [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 938910:0:(lov_object.c:1341:lov_layout_change()) lustre-clilov-ffff97fd6e105800: cannot apply new layout on [0x200000beb:0xc32:0x0] : rc = -5 LustreError: 938910:0:(lov_object.c:1341:lov_layout_change()) Skipped 3 previous similar messages Lustre: dir [0x200000beb:0x1170:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 26 previous similar messages Autotest: Test running for 215 minutes (lustre-reviews_review-dne-part-9_109245.11) Lustre: dir [0x200000bed:0x1c32:0x0] stripe 2 readdir failed: -2, directory is partially accessed! | Link to test |
racer test 2: racer rename: onyx-136vm1.onyx.whamcloud.com,onyx-136vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 845582 Comm: ll_sa_845413 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 a3 69 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa9a1c62b7e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff8feffbf08370 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff8fef87787600 R11: 0000000000000000 R12: ffff8fef87787600 R13: ffff8fef87787698 R14: ffff8feffbf08090 R15: ffff8fef877876a8 FS: 0000000000000000(0000) GS:ffff8ff0bbd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000065810004 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev i2c_piix4 pcspkr virtio_balloon sunrpc ext4 mbcache jbd2 ata_generic ata_piix crc32c_intel libata serio_raw virtio_net virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x200000bec:0x203d:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 8 previous similar messages Lustre: dir [0x280000be7:0x23a8:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x280000be8:0x3252:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 12 previous similar messages | Link to test |
racer test 2: racer rename: trevis-66vm1.trevis.whamcloud.com,trevis-66vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 627623 Comm: ll_sa_627489 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 43 6b 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffff974684f97e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100009 RDX: 000000000010000a RSI: ffff888e67757570 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff888e64701600 R11: 0000000000000000 R12: ffff888e64701600 R13: ffff888e64701698 R14: ffff888e67757290 R15: ffff888e647016a8 FS: 0000000000000000(0000) GS:ffff888e7fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000002da10002 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon joydev pcspkr i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_blk serio_raw virtio_net net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 595113:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000be9:0x4f8:0x0]: rc = -5 LustreError: 595113:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 6 previous similar messages LustreError: 595113:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 595113:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 6 previous similar messages Lustre: dir [0x280000be6:0xa46:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages LustreError: 334345:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for sleep [0x280000be9:0x201:0x0]: rc = -5 LustreError: 334345:0:(statahead.c:830:ll_statahead_interpret_work()) lustre: getattr callback for sleep [0x280000be9:0x201:0x0]: rc = -5 | Link to test |
racer test 2: racer rename: trevis-81vm1.trevis.whamcloud.com,trevis-81vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1122675 Comm: ll_sa_1122517 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 63 43 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffbbaac69cfe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100004 RDX: 0000000000100005 RSI: ffff9c5f320ee970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9c5f9ee4e600 R11: 0000000000000000 R12: ffff9c5f9ee4e600 R13: ffff9c5f9ee4e698 R14: ffff9c5f320ee690 R15: ffff9c5f9ee4e6a8 FS: 0000000000000000(0000) GS:ffff9c5fbfc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000086210004 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr sunrpc joydev virtio_balloon i2c_piix4 ext4 mbcache jbd2 ata_generic crc32c_intel ata_piix libata virtio_net serio_raw net_failover failover virtio_blk CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x280000be6:0xa13:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 15 previous similar messages Lustre: dir [0x200000bf1:0xc37:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 4 previous similar messages Lustre: dir [0x240000be9:0x1df9:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x200000bee:0x1ef3:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x2c0000beb:0x1fd9:0x0] stripe 2 readdir failed: -2, directory is partially accessed! LustreError: 1010152:0:(llite_nfs.c:446:ll_dir_get_parent_fid()) lustre: failure inode [0x200000bee:0x240e:0x0] get parent: rc = -116 Autotest: Test running for 260 minutes (lustre-reviews_review-dne-part-9_109212.11) Lustre: dir [0x240000beb:0x380a:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages | Link to test |
racer test 2: racer rename: trevis-107vm1.trevis.whamcloud.com,trevis-107vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1022337 Comm: ll_sa_1022183 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 23 5a 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb3fe4283be08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000005 RDX: ffff8a5b7fc38160 RSI: ffff8a5b57908970 RDI: 0000000000000008 RBP: 0000000000000008 R08: ffff8a5ac1003180 R09: fffffc2a804fca80 R10: 0000000000000000 R11: 000000000000000f R12: ffff8a5ac3f09800 R13: ffff8a5ac3f09898 R14: ffff8a5b57908690 R15: ffff8a5ac3f098a8 FS: 0000000000000000(0000) GS:ffff8a5b7fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000009c610002 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ata_generic ext4 mbcache jbd2 ata_piix libata crc32c_intel virtio_net virtio_blk serio_raw net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 1015636:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x2c0000be6:0x2d5:0x0]: rc = -5 LustreError: 1015636:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 52 previous similar messages LustreError: 1015636:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 1015636:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 52 previous similar messages | Link to test |
racer test 2: racer rename: trevis-107vm11.trevis.whamcloud.com,trevis-107vm12 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 4154045 Comm: ll_sa_4153892 Kdump: loaded Tainted: G OE --------- - - 4.18.0-477.27.1.el8_8.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 b4 05 53 5b e9 1d a2 42 00 48 89 df e8 b5 98 b0 ff 39 05 83 dc 5d 01 77 e3 5b e9 07 a2 42 00 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d e9 e5 RSP: 0018:ffffb19fc248be08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100007 RDX: 0000000000100008 RSI: ffff964147187b70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000000 R09: ffffffffc0f74400 R10: ffff9640ead70400 R11: 0000000000000001 R12: ffff9640ead70400 R13: ffff9640ead70498 R14: ffff964147187890 R15: ffff9640ead704a8 FS: 0000000000000000(0000) GS:ffff96417fc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000004a610005 CR4: 00000000003706f0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ll_statahead_thread+0x658/0x2120 [lustre] ? __raw_spin_unlock_irq+0x5/0x20 ? finish_task_switch+0x86/0x2e0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: loop ib_core nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver intel_rapl_msr nfs lockd grace fscache intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net virtio_blk serio_raw net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false 
RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x280006994:0x2fa6:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 60 previous similar messages Autotest: Test running for 1020 minutes (lustre-master_full-dne-zfs-part-2_4592.11) Lustre: dir [0x200007167:0x5f9c:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 24 previous similar messages | Link to test |
racer test 2: racer rename: trevis-129vm1.trevis.whamcloud.com,trevis-129vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 911076 Comm: ll_sa_910915 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 43 4e 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb79002f7fe08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100003 RDX: 0000000000100004 RSI: ffff9c35f3106970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9c36631bc600 R11: 0000000000000000 R12: ffff9c36631bc600 R13: ffff9c36631bc698 R14: ffff9c35f3106690 R15: ffff9c36631bc6a8 FS: 0000000000000000(0000) GS:ffff9c367fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000063e10001 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_blk virtio_net net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 756188:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x280000be8:0x13b:0x0]: rc = -5 LustreError: 756188:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 6 previous similar messages LustreError: 756188:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 756188:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 6 previous similar messages Lustre: dir [0x280000be9:0x8f1:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 51 previous similar messages Lustre: dir [0x280000be6:0xbe8:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages Lustre: dir [0x2c0000be6:0xdde:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x280000be9:0x1972:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 6 previous similar messages | Link to test |
racer test 2: racer rename: onyx-24vm4.onyx.whamcloud.com,onyx-24vm5 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 655209 Comm: ll_sa_655096 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 63 4e 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffc061c6017e08 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100007 RDX: 0000000000100008 RSI: ffff9a75bdc6bf70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff9a75a3f86200 R11: 0000000000000000 R12: ffff9a75a3f86200 R13: ffff9a75a3f86298 R14: ffff9a75bdc6bc90 R15: ffff9a75a3f862a8 FS: 0000000000000000(0000) GS:ffff9a75ffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000006b810003 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core lustre(OE) mgc(OE) mdc(OE) lov(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) osc(OE) ksocklnd(OE) ptlrpc(OE) obdecho(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel serio_raw net_failover failover virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false 
RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 596266:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x200000be9:0x506:0x0]: rc = -5 LustreError: 596266:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 1 previous similar message LustreError: 596266:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 596266:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 1 previous similar message Lustre: dir [0x2c0000bea:0x6bc:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages | Link to test |
racer test 2: racer rename: trevis-47vm1.trevis.whamcloud.com,trevis-47vm2 DURATION=300 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 861218 Comm: ll_sa_861067 Kdump: loaded Tainted: G OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 a3 64 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa211c44fbe08 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff94863f52b970 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000001 R10: ffff9486313d1200 R11: 0000000000000000 R12: ffff9486313d1200 R13: ffff9486313d1298 R14: ffff94863f52b690 R15: ffff9486313d12a8 FS: 0000000000000000(0000) GS:ffff94867fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 00000000bd210006 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl ib_core mgc(OE) lustre(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) libcfs(OE) rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel sunrpc pcspkr joydev virtio_balloon i2c_piix4 ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel serio_raw virtio_net virtio_blk net_failover failover CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= 
RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=300 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 604881:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x240000be8:0xe8:0x0]: rc = -5 LustreError: 604881:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 56 previous similar messages LustreError: 604881:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 604881:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 56 previous similar messages Lustre: dir [0x200000bed:0xb31:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 3 previous similar messages Lustre: dir [0x2c0000bea:0x587:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 1 previous similar message Lustre: dir [0x240000be6:0x2326:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 8 previous similar messages Autotest: Test running for 260 minutes (lustre-reviews_review-dne-part-9_109135.29) | Link to test |
racer test 2: racer rename: trevis-24vm4.trevis.whamcloud.com,trevis-24vm5 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 747982 Comm: ll_sa_747826 Kdump: loaded Tainted: G W OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 43 60 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffac9b41c73e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000000010000c RDX: 000000000010000d RSI: ffff97fc9fd13f70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff97fc8310fc00 R11: 0000000000000000 R12: ffff97fc8310fc00 R13: ffff97fc8310fc98 R14: ffff97fc9fd13c90 R15: ffff97fc8310fca8 FS: 0000000000000000(0000) GS:ffff97fcbfd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000007f410005 CR4: 00000000000606e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) ib_core tcp_diag inet_diag loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr virtio_balloon joydev i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic virtio_net crc32c_intel ata_piix net_failover failover libata serio_raw virtio_blk [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false 
RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x2c0006997:0x161e:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 38 previous similar messages Autotest: Test running for 1230 minutes (lustre-master-next_full-dne-zfs-part-2_839.35) Lustre: dir [0x2c0006997:0x4273:0x0] stripe 2 readdir failed: -2, directory is partially accessed! Lustre: Skipped 25 previous similar messages | Link to test |
racer test 2: racer rename: onyx-143vm7.onyx.whamcloud.com,onyx-143vm8 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 768693 Comm: ll_sa_768585 Kdump: loaded Tainted: G W OE --------- - - 4.18.0-513.24.1.el8_9.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 73 78 84 50 5b c3 cc cc cc cc 48 89 df e8 05 d7 af ff 39 05 03 8c 5d 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffa96047a63e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff8bbd05fbfb70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: ffffffffc10eb100 R10: ffff8bbe1cfa0000 R11: 0000000000000001 R12: ffff8bbe1cfa0000 R13: ffff8bbe1cfa0098 R14: ffff8bbd05fbf890 R15: ffff8bbe1cfa00a8 FS: 0000000000000000(0000) GS:ffff8bbe3bd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000022210004 CR4: 00000000003706e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x16c/0x1c0 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? ll_sai_put+0xa0/0x2f0 [lustre] ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) ib_core tcp_diag inet_diag loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel virtio_balloon i2c_piix4 pcspkr joydev sunrpc ext4 mbcache jbd2 ata_generic crc32c_intel ata_piix libata serio_raw virtio_net virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x240007932:0x20dc:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 120 previous similar messages | Link to test |
racer test 2: racer rename: onyx-75vm1.onyx.whamcloud.com,onyx-75vm2 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 1376224 Comm: ll_sa_1376108 Kdump: loaded Tainted: G W OE --------- - - 4.18.0-477.27.1.el8_8.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 b3 b4 25 4a 5b e9 1d a2 42 00 48 89 df e8 b5 98 b0 ff 39 05 83 dc 5d 01 77 e3 5b e9 07 a2 42 00 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d e9 e5 RSP: 0018:ffff9953c5af7e08 EFLAGS: 00010206 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100006 RDX: 0000000000100007 RSI: ffff88e9e3555770 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000000 R09: ffffffffc0d8f400 R10: ffff88e9940aae00 R11: 0000000000000001 R12: ffff88e9940aae00 R13: ffff88e9940aae98 R14: ffff88e9e3555490 R15: ffff88e9940aaea8 FS: 0000000000000000(0000) GS:ffff88e9ffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000012410001 CR4: 00000000001706f0 Call Trace: ll_statahead_thread+0x658/0x2120 [lustre] ? __raw_spin_unlock_irq+0x5/0x20 ? finish_task_switch+0x86/0x2e0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) ib_core tcp_diag inet_diag loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs intel_rapl_msr lockd grace fscache intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcspkr joydev virtio_balloon i2c_piix4 sunrpc ata_generic ext4 mbcache jbd2 ata_piix libata virtio_net crc32c_intel serio_raw virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x2c0006990:0x1f0d:0x0] stripe 0 readdir failed: -2, directory is partially accessed! Lustre: Skipped 10 previous similar messages Autotest: Test running for 1040 minutes (lustre-master-next_full-dne-zfs-part-2_839.11) Lustre: dir [0x2c0006998:0x4f3f:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 9 previous similar messages Autotest: Test running for 1045 minutes (lustre-master-next_full-dne-zfs-part-2_839.11) Autotest: Test running for 1050 minutes (lustre-master-next_full-dne-zfs-part-2_839.11) | Link to test |
racer test 2: racer rename: onyx-25vm1.onyx.whamcloud.com,onyx-25vm2 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 0 PID: 534928 Comm: ll_sa_534809 Kdump: loaded Tainted: G W OE -------- - - 4.18.0-553.16.1.el8_10.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 03 77 03 4c 5b c3 cc cc cc cc 48 89 df e8 25 22 af ff 39 05 13 a1 5c 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb1ec88a47e08 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100008 RDX: 0000000000100009 RSI: ffff91fce02ddd70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: 0000000000000000 R10: ffff91fc94651e00 R11: 0000000000000000 R12: ffff91fc94651e00 R13: ffff91fc94651e98 R14: ffff91fce02dda90 R15: ffff91fc94651ea8 FS: 0000000000000000(0000) GS:ffff91fcffc00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000057410006 CR4: 00000000000606f0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x157/0x180 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) ib_core tcp_diag inet_diag loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw net_failover virtio_blk failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c LustreError: 272347:0:(lcommon_cl.c:195:cl_file_inode_init()) lustre: failed to initialize cl_object [0x20000716a:0x1ef:0x0]: rc = -5 LustreError: 272347:0:(lcommon_cl.c:195:cl_file_inode_init()) Skipped 106 previous similar messages LustreError: 272347:0:(llite_lib.c:3731:ll_prep_inode()) lustre: new_inode - fatal error: rc = -5 LustreError: 272347:0:(llite_lib.c:3731:ll_prep_inode()) Skipped 106 previous similar messages Autotest: Test running for 1080 minutes (lustre-master-next_full-dne-zfs-part-2_838.35) Lustre: dir [0x200007168:0x32fa:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 11 previous similar messages | Link to test |
racer test 2: racer rename: onyx-76vm1.onyx.whamcloud.com,onyx-76vm2 DURATION=900 | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 826445 Comm: ll_sa_826294 Kdump: loaded Tainted: G W OE --------- - - 4.18.0-513.24.1.el8_9.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 73 78 24 47 5b c3 cc cc cc cc 48 89 df e8 05 d7 af ff 39 05 03 8c 5d 01 77 e3 5b c3 cc cc cc cc 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d c3 cc RSP: 0018:ffffb711424bbe08 EFLAGS: 00010246 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000100007 RDX: 0000000000100008 RSI: ffff8b371291ad70 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000000 R09: ffffffffc1087400 R10: ffff8b370cc52c00 R11: 0000000000000001 R12: ffff8b370cc52c00 R13: ffff8b370cc52c98 R14: ffff8b371291aa90 R15: ffff8b370cc52ca8 FS: 0000000000000000(0000) GS:ffff8b377fd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 0000000060010003 CR4: 00000000001706e0 Call Trace: ? __die_body+0x1a/0x60 ? no_context+0x1ba/0x3f0 ? __bad_area_nosemaphore+0x16c/0x1c0 ? do_page_fault+0x37/0x12d ? page_fault+0x1e/0x30 ? ll_sai_put+0x120/0x2f0 [lustre] ? _atomic_dec_and_lock+0x2/0x50 ll_statahead_thread+0x658/0x2120 [lustre] ? __switch_to_asm+0x43/0x80 ? finish_task_switch+0x86/0x2f0 ? __schedule+0x2d9/0x870 ? ll_statahead_handle.constprop.30+0x170/0x170 [lustre] kthread+0x134/0x150 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsv3 nfsd nfs_acl lustre(OE) obdecho(OE) mgc(OE) mdc(OE) lov(OE) osc(OE) lmv(OE) fid(OE) fld(OE) ptlrpc_gss(OE) ptlrpc(OE) obdclass(OE) ksocklnd(OE) lnet(OE) libcfs(OE) ib_core tcp_diag inet_diag loop rpcsec_gss_krb5 auth_rpcgss intel_rapl_msr nfsv4 intel_rapl_common dns_resolver crct10dif_pclmul nfs lockd grace crc32_pclmul fscache ghash_clmulni_intel joydev pcspkr virtio_balloon i2c_piix4 sunrpc ext4 mbcache jbd2 ata_generic ata_piix libata crc32c_intel virtio_net serio_raw virtio_blk net_failover failover [last unloaded: libcfs] CR2: 0000000000000008 | Lustre: DEBUG MARKER: DURATION=900 MDSCOUNT=4 OSTCOUNT=8 RACER_MAX_MB=0 RACER_ENABLE_FLR=false RACER_ENABLE_DOM=false RACER_ENABLE_SEL=false RACER_ENABLE_MIGRATION=false RACER_MAX_CLEANUP_WAIT= RACER_EXTRA="" RACER_EXTRA_LAYOUT="" RACER_PROGS=dir_c Lustre: dir [0x2c0006996:0x3789:0x0] stripe 1 readdir failed: -2, directory is partially accessed! Lustre: Skipped 218 previous similar messages LustreError: lustre-MDT0000-mdc-ffff8b375dcfd800: operation ldlm_enqueue to node 10.240.26.106@tcp failed: rc = -107 Lustre: lustre-MDT0000-mdc-ffff8b375dcfd800: Connection to lustre-MDT0000 (at 10.240.26.106@tcp) was lost; in progress operations using this service will wait for recovery to complete LustreError: lustre-MDT0000-mdc-ffff8b375dcfd800: This client was evicted by lustre-MDT0000; in progress operations using this service will fail. LustreError: 448364:0:(file.c:5973:ll_inode_revalidate_fini()) lustre: revalidate FID [0x200007162:0x11:0x0] error: rc = -108 Lustre: lustre-MDT0000-mdc-ffff8b375dcfd800: Connection restored to 10.240.26.106@tcp (at 10.240.26.106@tcp) Autotest: Test running for 875 minutes (lustre-master-next_full-dne-zfs-part-2_838.23) | Link to test |
parallel-scale-nfsv3 test racer_on_nfs: racer on NFS client | BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 PGD 0 P4D 0 Oops: 0000 [#1] SMP PTI CPU: 1 PID: 40515 Comm: ll_sa_21331 Kdump: loaded Tainted: G OE --------- - - 4.18.0-425.10.1.el8_lustre.x86_64 #1 Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 RIP: 0010:_atomic_dec_and_lock+0x2/0x50 Code: 05 93 34 86 55 5b e9 3d a2 23 00 48 89 df e8 f5 f1 b0 ff 39 05 23 a6 3e 01 77 e3 5b e9 27 a2 23 00 90 90 90 90 90 90 90 55 53 <8b> 07 83 f8 01 74 12 8d 50 ff f0 0f b1 17 75 f2 31 c0 5b 5d e9 05 RSP: 0018:ffffb5fbc0bafe10 EFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 000000008010000e RDX: 000000008010000f RSI: ffff88be622ca770 RDI: 0000000000000008 RBP: 0000000000000008 R08: 0000000000000001 R09: ffffffffc11c3c00 R10: ffff88bec475a400 R11: 0000000000000001 R12: ffff88bec475a400 R13: ffff88bec475a4a8 R14: ffff88be622ca490 R15: ffff88bec475a498 FS: 0000000000000000(0000) GS:ffff88beffd00000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000008 CR3: 000000003c810002 CR4: 00000000000606e0 Call Trace: ll_statahead_thread+0x66c/0x2090 [lustre] ? __raw_spin_unlock_irq+0x5/0x20 ? finish_task_switch+0xaf/0x2e0 ? __schedule+0x2d9/0x860 ? ll_statahead_handle.constprop.31+0x170/0x170 [lustre] kthread+0x10b/0x130 ? 
set_kthread_struct+0x50/0x50 ret_from_fork+0x35/0x40 Modules linked in: nfsd nfs_acl osp(OE) mdd(OE) lod(OE) mdt(OE) lfsck(OE) mgs(OE) mgc(OE) osd_ldiskfs(OE) lquota(OE) lustre(OE) lmv(OE) mdc(OE) lov(OE) osc(OE) fid(OE) fld(OE) ksocklnd(OE) ptlrpc(OE) obdclass(OE) lnet(OE) ldiskfs(OE) libcfs(OE) dm_flakey rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul ghash_clmulni_intel joydev virtio_balloon pcspkr i2c_piix4 sunrpc dm_mod ext4 mbcache jbd2 ata_generic ata_piix libata virtio_net crc32c_intel serio_raw net_failover failover virtio_blk CR2: 0000000000000008 | Link to test |