After this change it fits more naturally into Go's error handling.
|
|
|
|
This is a faster way to find all coverage points.
Signed-off-by: Alexander Egorenkov <eaibmz@gmail.com>
|
|
|
|
objdump prints absolute addresses for coverage points of the core kernel.
Signed-off-by: Alexander Egorenkov <eaibmz@gmail.com>
|
|
|
|
'attrName' is often an absolute path for out-of-tree modules.
This commit avoids redundant path concatenation when 'attrName'
is already absolute, enabling developers to view coverage correctly
in the web UI.
|
|
The lines about folders don't look actionable.
|
|
|
|
|
|
|
|
Rust compilation units differ from C in that a single compilation
unit includes multiple source files, but we still need to tell which PC
range belongs to which source file.
Infer that information from the LineEntry structures.
Cc #6000.
|
|
We periodically send coverage reports for regression detection.
|
|
The last filtering step is the removal of empty directories.
|
|
|
|
The <pre> tag is used to preserve formatting.
<pre> uses a monospace font, so the whole file tree was switched to monospace.
<pre> also adds a margin; it is manually forced to 0.
|
|
Tree view now shows the total drop for every item.
|
|
|
|
|
|
|
|
|
|
|
cover.Format controls the resulting view.
It allows us to:
1. Remove records with 0 covered blocks.
2. Remove lines with low (user-defined) coverage.
3. Order records by the covered-lines drop value.
The corresponding GET parameters are:
1. Implicitly enabled for onlyUnique records.
2. min-cover-lines-drop=%d
3. order-by-cover-lines-drop=1
|
|
|
|
Add coverage percent for kernel interfaces.
The current data is generated from the March coverage report
on kernel commit 1e7857b28020ba57ca7fdafae7ac855ba326c697.
|
|
|
|
|
|
Currently it's only possible to learn the total number of uncovered
blocks in a function (implicitly defined by the Instrumented field).
This allows neither rendering coverage data nor doing detailed analysis
at the line level. Export detailed info about both covered and uncovered blocks.
This allows us to e.g. calculate coverage percent for kernel interfaces.
|
|
Humans read top-down rather than in a zigzag.
|
|
|
|
|
Quarter-long aggregation means thousands of gzip files.
Opening all the files in parallel, we suffer from:
1. Memory overhead.
2. GCS API errors: it can't read Attrs for 1500+ files.
|
|
New code will be limited to at most 7 function parameters.
|
|
It allows us to reduce the parameter count for some functions.
|
|
|
|
|
|
|
|
|
|
|
|
|
Some sub-paths may not be covered due to hardware configuration, or lack
of interest. This patch allows them to be excluded from the stats. This
can be convenient if the excluded paths are deep in the hierarchy:
{
  "name": "sound",
  "path": [
    "techpack/audio",
    "-techpack/audio/asoc/aaa/bbb",
    "-techpack/audio/asoc/aaa/ccc"
  ]
}
|
To simplify the interface, Read*Symbols were moved out of symbolizer.Symbolizer.
|
|
This function reached the cyclomatic complexity limit of 24.
|
|
|
The export is quite big but is generated quickly.
Every line is a valid JSON object representing a single program's coverage.
|
|
There is no need to init arch on every loop iteration.
|
|
The typo fix introduced variable-name shadowing, so it is easier to remove the intermediate variable.
|
|
Reads from this map return Progs, not PCs.
|
|
It will simplify the JSON API links.
|
|
|
|
|
|
filepath.Walk calls os.Lstat for every file or directory to retrieve an os.FileInfo.
filepath.WalkDir avoids these unnecessary system calls: it provides a fs.DirEntry,
which includes file type information without requiring a stat call,
improving performance.
|
|
|
|
|
|
1. Init the coveragedb client once and propagate it through the context to enable mocking.
2. Always init coverage handlers; it simplifies testing.
3. Read webGit and the coveragedb client from ctx to make them mockable.
4. Use int for file line numbers and int64 for merged coverage.
5. Add tests.
|
|
|
|
|
|
|
|
|
|
1. Make the heatmap testable; move out the spanner client instantiation.
2. Generate spannerdb.ReadOnlyTransaction mocks.
3. Generate spannerdb.RowIterator mocks.
4. Generate spannerdb.Row mocks.
5. Prepare a spannerdb fixture.
6. Fix the HTML control name + value.
7. Add multiple tests.
8. Show line coverage from the selected manager.
9. Propagate coverage URL params to the file coverage URL.
|
|
They are shorter, more readable, and don't require temp vars.
|
|
|
|
|
|
|
It allows us to control the known parameters:
1. Period (months or days).
2. Target subsystem.
3. Target manager.
It also adds the disabled "Only unique" checkbox.
|
|
|
|
|
|
The problem is a deadlock that happens on a GCS storage error.
The GCS client establishes the connection once it has enough data to write,
approximately 16MB. The error happens on io.Writer access in the middle of the merge.
It terminates one errgroup goroutine while the other goroutines remain blocked.
This commit propagates the termination signal to the other goroutines.
|
|
|
|
|
|
1. Make the interface testable.
2. Add Spanner interfaces.
3. Generate mocks for proxy interfaces.
4. Test SaveMergeResult.
5. Test the MergeCSVWriteJSONL and coveragedb.SaveMergeResult integration.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The previous implementation stored only a summary of the processed records.
The summary was <1GB, and a single processing node was able to manipulate the data.
The current implementation stores all the details about the records read, to make post-processing more flexible.
This change was needed to get access to the source manager name and will help to analyze other details.
The new implementation requires 20GB of memory to process a single day of records.
A CSV log interning experiment allowed merging with 10GB.
Quarter-long data aggregation will cost ~100 times more.
The alternative is stream processing: we can process data kernel-file-by-file,
dividing memory consumption by ~15000. This approach is implemented here.
We batch coverage signals by file and store per-file results in a GCS JSONL file.
See https://jsonlines.org/ to learn about JSONL.
|
|
|
By storing all the details about the coverage data source, we're able to better explain the origin.
This origin data is currently used to get the "manager" name.
|
|
It enables us to see the manager's unique coverage.