| author | Dmitry Vyukov <dvyukov@google.com> | 2024-11-20 08:43:01 +0100 |
|---|---|---|
| committer | Dmitry Vyukov <dvyukov@google.com> | 2024-11-20 11:33:58 +0000 |
| commit | 4fca1650892b7aba6ac219ce521543d411cf96ac (patch) | |
| tree | 668b39c6f72c509a44dabc13c0fcdcac386a1f81 /executor/executor.cc | |
| parent | f56b4dcc82d7af38bf94d643c5750cf49a91a297 (diff) | |
executor: increase coverage buffer size
The coverage buffers frequently overflow.
We cannot increase them radically because they consume lots of memory
(num procs x num kcovs x buffer size) and lead to OOM kills
(at least with 8 procs and a 2GB KASAN VM).
So increase the buffer size 2x and slightly reduce the number of threads/kcov descriptors.
However, in snapshot mode we can be more aggressive (there is only 1 proc).
This reduces the number of overflows by ~2-4x depending on the syscall.
Diffstat (limited to 'executor/executor.cc')
| -rw-r--r-- | executor/executor.cc | 13 |
1 file changed, 6 insertions(+), 7 deletions(-)
```diff
diff --git a/executor/executor.cc b/executor/executor.cc
index 603f23f73..750858be2 100644
--- a/executor/executor.cc
+++ b/executor/executor.cc
@@ -72,13 +72,13 @@ const int kOutPipeFd = kMaxFd - 2; // remapped from stdout
 const int kCoverFd = kOutPipeFd - kMaxThreads;
 const int kExtraCoverFd = kCoverFd - 1;
 const int kMaxArgs = 9;
-const int kCoverSize = 256 << 10;
+const int kCoverSize = 512 << 10;
 const int kFailStatus = 67;
 
 // Two approaches of dealing with kcov memory.
-const int kCoverOptimizedCount = 12; // the number of kcov instances to be opened inside main()
+const int kCoverOptimizedCount = 8; // the max number of kcov instances
 const int kCoverOptimizedPreMmap = 3; // this many will be mmapped inside main(), others - when needed.
-const int kCoverDefaultCount = 6; // otherwise we only init kcov instances inside main()
+const int kCoverDefaultCount = 6; // the max number of kcov instances when delayed kcov mmap is not available
 
 // Logical error (e.g. invalid input program), use as an assert() alternative.
 // If such error happens 10+ times in a row, it will be detected as a bug by the runner process.
@@ -1129,6 +1129,8 @@ uint32 write_signal(flatbuffers::FlatBufferBuilder& fbb, int index, cover_t* cov
 	// Currently it is code edges computed as xor of two subsequent basic block PCs.
 	fbb.StartVector(0, sizeof(uint64));
 	cover_data_t* cover_data = (cover_data_t*)(cov->data + cov->data_offset);
+	if ((char*)(cover_data + cov->size) > cov->data_end)
+		failmsg("too much cover", "cov=%u", cov->size);
 	uint32 nsig = 0;
 	cover_data_t prev_pc = 0;
 	bool prev_filter = true;
@@ -1468,11 +1470,8 @@ void execute_call(thread_t* th)
 	// Reset the flag before the first possible fail().
 	th->soft_fail_state = false;
 
-	if (flag_coverage) {
+	if (flag_coverage)
 		cover_collect(&th->cov);
-		if (th->cov.size >= kCoverSize)
-			failmsg("too much cover", "thr=%d, cov=%u", th->id, th->cov.size);
-	}
 	th->fault_injected = false;
 	if (th->call_props.fail_nth > 0)
```
