Current logic only updates information about already known files.
Additionally, we want to see subsystem information for all created/added files.
any is now preferred over interface{} in Go.
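A minimal sketch of what this change means in practice: since Go 1.18, any is a predeclared alias for interface{}, so the two are fully interchangeable and the rename is purely cosmetic. The describe function below is illustrative, not from the codebase.

```go
package main

import "fmt"

// Since Go 1.18, any is a predeclared alias for interface{}:
//     type any = interface{}
// so any can replace interface{} everywhere with no behavior change.
func describe(v any) string {
	return fmt.Sprintf("%T", v)
}

func main() {
	fmt.Println(describe(42))     // int
	fmt.Println(describe("text")) // string
}
```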
#6070 explains the data propagation problem.
1. Add a weekly /cron/update_coverdb_subsystems.
2. Stop updating subsystems from the coverage receiver API.
1. Refactor handleHeatmap.
2. Introduce functional options. Build them from http.Request.
Happy-path tests rely on the iter.Stop() call being made before we close errCh.
It fixes the misalignment with the code.
A session aggregates information about multiple builds.
The source code may have several function definitions per file.
In case of multiple function definitions we want to see sum(func1 + func2).
Instead of [line_from, line_to], it is better to store the specific list of lines that signaled function coverage.
1. Init the coveragedb client once and propagate it through the context to enable mocking.
2. Always init the coverage handlers. It simplifies testing.
3. Read webGit and the coveragedb client from ctx to make them mockable.
4. Use int for file line numbers and int64 for merged coverage.
5. Add tests.
1. Make heatmap testable, move out the spanner client instantiation.
2. Generate spannerdb.ReadOnlyTransaction mocks.
3. Generate spannerdb.RowIterator mocks.
4. Generate spannerdb.Row mocks.
5. Prepare a spannerdb fixture.
6. Fix the html control name + value.
7. Add multiple tests.
8. Show line coverage from the selected manager.
9. Propagate coverage url params to the file coverage url.
The current schema makes session+filepath the primary key (it is unique).
Adding manager to the primary key makes session+filepath+manager the unique combination.
Current tests check the amount of data transferred to the mock,
but they don't check the data's correctness.
Because of this bug, every batch had the same coverage, which is nonsense.
1. Make interface testable.
2. Add Spanner interfaces.
3. Generate mocks for proxy interfaces.
4. Test SaveMergeResult.
5. Test MergeCSVWriteJSONL and coveragedb.SaveMergeResult integration.
The previous implementation stored only a summary of the processed records.
The summary was <1GB, and a single processing node was able to manipulate the data.
The current implementation stores all the details about the records read to make post-processing more flexible.
This change was needed to get access to the source manager name and will help to analyze other details.
This new implementation requires 20GB of memory to process a single day's records.
A CSV log interning experiment allowed merging using 10GB.
Quarterly data aggregation would cost ~100 times more.
The alternative is stream processing: we can process data kernel-file-by-file.
It divides memory consumption by ~15000.
This approach is implemented here.
We batch coverage signals by file and store the per-file results in a GCS JSONL file.
See https://jsonlines.org/ to learn about JSONL.
I don't see any visible problems, but the records in the DB are not created.
Let's report the number of records created at the end of the batch step,
and also log the names of the managers.
It enables us to see the manager's unique coverage.
We currently merge BigQuery data for every line coverage request.
Let's read cached line coverage data from Spanner instead.
This lets us fetch only one file version from git and skip the data merge step.
Instrumented lines + hit count give more information than instrumented + covered lines.
The expected storage cost is at the same level.
It stores instrumented and covered line numbers in the same table
where we store the file coverage numbers.
These line numbers will be used to speed up file coverage rendering.
It directly uses the coverage signals from BigQuery.
There is no need to wait for the coverage_batch cron jobs.
It looks good for debugging.
Limitations:
1. It is slow. I know how to speed it up but want to stabilize the UI first.
2. It is expensive because of the direct BQ requests. It is limited to admins only because of this.
3. It merges only the commits reachable on github because of the gitweb throttling.
After the UI stabilization I'll save all the required artifacts to Spanner and make this page publicly available.
To merge all the commits, not only the github-reachable ones, an http git caching instance is needed.
Current coverage is a week-long data merge for every day.
The goal is to show monthly data by default and
make daily data available for &period=day requests.
This commit makes the first two steps:
1. Make the day a day-long, not a week-long, aggregation.
2. Make the month-long merges available for &period=month.
The total coverage page defaults to quarters (?period=quarter).
Monthly data is available with ?period=month.