Compare commits

...

547 Commits

Author SHA1 Message Date
gaasedelen 0350b0c538 Follow-up to #157: unsubscribe torn-down coverage overview refresh callback 2026-02-13 19:39:44 -05:00
gaasedelen 88051b0652 Follow-up to #157: keep IDA8/9.1 sip path and bump dev version to 0.9.4 2026-02-13 18:56:11 -05:00
gaasedelen 1d7e4b321c Merge PR #157: IDA 9.2/9.3 compatibility fixes 2026-02-13 18:55:54 -05:00
raptor 602bb611a4 Fix some issues in IDA 9.1 2025-09-09 14:04:28 +02:00
raptor c5cc03d7dc Improve IDA 9.2 support without breaking previous supported versions 2025-09-09 13:34:47 +02:00
raptor 720b036a9d Start working on IDA 9.2 port 2025-09-09 11:43:22 +02:00
raptor b8ff268c86 IDA 9.x SDK fix 2025-08-28 17:47:43 +02:00
gaasedelen 88ceac7b19 tweak to make lighthouse copy *all* in-box theme files to the user theme directory... 2024-02-06 13:59:33 -05:00
gaasedelen 9a56463103 make update dialog look a little less janky 2024-02-06 13:30:54 -05:00
gaasedelen 562595be9b Merge branch 'develop' 2024-02-05 13:05:19 -05:00
gaasedelen eeb16a5f81 final fixes, ticking version numbers 2024-02-05 13:03:41 -05:00
gaasedelen 84eeb210c6 minor cleanup, finalizing for release 2024-02-02 19:28:55 -05:00
gaasedelen f944b62cc3 Update Intel pin build scripts for pin-3.30-98830 (MSVC) 2024-02-02 19:06:33 -05:00
gaasedelen 1b883377d6 switch database metadata instruction listing to a set, dramatically improving coverage loading perf in several cases, fixing #128 2024-02-02 17:31:56 -05:00
gaasedelen 146eb4fd2d fix bug where 'X' column in combobox was super wide/fat under PySide6 for some reason 2024-02-02 17:03:08 -05:00
gaasedelen de2704b680 Coverage Xref right click UI action rewrite for binja 2024-02-02 16:38:04 -05:00
gaasedelen 96df2c5a1f adds "long night" theme by ioncodes 2024-02-02 12:19:31 -05:00
gaasedelen 9c579c6e9d fix #131, hopefully improves dpi issues cross platform 2024-02-02 11:55:05 -05:00
gaasedelen ee7ac30f14 fix binary ninja "suspicious" warnings 2024-02-02 11:32:04 -05:00
gaasedelen 5f8fa02de2 fix function/symbol rename event hooks in binja 2024-02-02 11:32:04 -05:00
Thomas Dupuy 87defff5cd Fix small typo. 2024-02-01 20:31:24 -05:00
gaasedelen e1438159da fixes coverage xref compatibility issues (timestamp crash) with binja / PySide6 per #123 2024-02-01 20:25:08 -05:00
gaasedelen 856768d129 fix clipboard compat issue exposed by PySide6 2024-02-01 20:08:26 -05:00
gaasedelen 7d67e41773 Fixes issue raised in #129 while retaining compatibility and functionality across IDA / binja. 2024-02-01 19:54:30 -05:00
gaasedelen 8f3c23bbb6 maintain compatibility with IDA 2024-02-01 18:50:58 -05:00
Alexandre Brenner 4992d4b731 Fix ValueError
Only tested on Windows
2024-02-01 18:50:58 -05:00
Alexandre Brenner 550c476e81 Fix TypeError
`TypeError: BasicBlock._create_instance() takes 2 positional arguments but 3 were given`
Only tested on Windows
2024-02-01 18:50:58 -05:00
mishap mishap bba8d91253 Allow partial module whitelist match 2024-01-18 20:49:25 -05:00
gaasedelen 0e0e08e564 normalize code/comments of drcov 3 fix 2024-01-18 19:47:48 -05:00
PavelBlinnikov 6912018b89 add support for drcov version 3 2024-01-18 19:47:48 -05:00
Moritz Schloegel f4642e8b4b fix typo in gradient 2022-02-17 14:29:27 -05:00
Moritz Schloegel 6e1dcb8d40 convert float to int as expected by Qt API functions (fixes #116) 2022-02-17 14:29:27 -05:00
Alexander Taylor 973b40e6d0 Fix for func possibly being None. 2022-01-10 11:36:32 -05:00
gaasedelen e3d636a3f4 adds support for tracking, viewing, and dumping orphan coverage (#108) 2021-10-05 22:48:37 -04:00
gaasedelen 7245a2d2c4 Merge branch 'develop' 2021-08-10 16:41:04 -04:00
gaasedelen 701952d83f add x86 support to tenet reader 2021-08-10 16:39:05 -04:00
gaasedelen 9ed0ccf528 updated readme, version number 2021-08-10 16:35:18 -04:00
gaasedelen ae1072a04b improves background color detection logic 2021-08-09 16:43:18 -04:00
gaasedelen 4c19fee630 bugfix when loading executables with unicode names on Python 2.7 2021-08-09 16:42:47 -04:00
gaasedelen d02fbf13d9 adds support for loading text-based Tenet traces 2021-08-06 18:45:51 -04:00
_yrp 19eddd6470 Lower block trace confidence (#105)
Depending on how blocks are traced, on some targets 0.9 is too high. Specifically, some emulator-based tracers might consider the first instruction after a return to be a new "block" -- this will inflate the number of instructions with coverage past the threshold, even though the trace should still be considered block cov.
2021-08-06 18:41:41 -04:00
gaasedelen 09a95061ed reduce banner print 2021-08-03 16:47:34 -04:00
gaasedelen 94759790de font tweaks again 2021-08-03 16:46:56 -04:00
gaasedelen 374ca999f8 maybe fixes qt conflict issues? (issue #100) 2021-08-03 12:31:10 -04:00
gaasedelen b3c8683110 font fixes for macos...? for the 10th time...? 2021-07-30 09:03:40 -04:00
gaasedelen aa71810bf7 PySide6 & Python2 Compat :-X 2021-07-30 09:03:11 -04:00
gaasedelen b0f798a3d9 a bit of cleanup for binja plugin manager stuff 2021-07-29 10:42:45 -04:00
gaasedelen 85b08a9611 spindown lighthouse contexts in binja 2021-07-29 10:33:00 -04:00
gaasedelen f2031f921f a bit of housekeeping on drcov.py 2021-07-29 06:08:14 -04:00
Jordan 8e98a74eee Qt6 Updates for Binja + Binja plugin manager compatibility (#106)
* initial pyside6 support and testing binary ninja plugin installation

* small tweak to fix plugin working with internal test builds
2021-07-29 05:47:36 -04:00
gaasedelen 85818f5b9b Merge pull request #98 from angelystor/master
Added DrCov module table version 5 support
2021-04-09 19:41:08 -04:00
Angelystor 8434c80e18 Added DrCov module table version 5 support which the latest DynamoRIO outputs 2021-04-08 18:55:09 +08:00
DrWhax a25e77764a Fixes for fixing the PIN build for all operating systems (#94)
* remove unneeded std namespace

* remove Wno-aligned-new

* This makes the build work again, tested on VC2015 community edition on
Windows Server 2019.

Credit for these changes go to: https://github.com/HongThatCong/Lighthouse-CodeCoverage-build
2020-12-26 21:57:41 -05:00
gaasedelen 91e427e7ec Merge branch 'develop' 2020-09-02 14:57:02 -04:00
gaasedelen 1bad74fc50 updates version string, readme tweaks 2020-09-02 14:55:44 -04:00
gaasedelen 68325095ee added the ability to suppress the module selector dialog 2020-08-31 21:58:02 -04:00
gaasedelen a50298a9d9 fix crash when ida is not shipped with ida_hexrays (issue #88) 2020-08-31 21:55:50 -04:00
gaasedelen 18ea9bbc69 disable logging by default, as things seem stable... 2020-07-28 22:30:08 -04:00
gaasedelen 5818ffe3fc trim filepaths off a coverage file's 'module names' (if present) 2020-07-28 22:29:40 -04:00
gaasedelen 5258d583c7 fix regression introduced in the shared note / painting fix from a commit or two prior 2020-07-28 22:09:52 -04:00
gaasedelen 5480afe646 renamed plugin dir to plugins for easier dev 2020-07-28 19:45:58 -04:00
gaasedelen 7a1abaeaa1 fixes a bug where the wrong nodes could get painted in IDA if a 'shared' node was executed 2020-07-04 22:12:55 -04:00
gaasedelen 1cf63ac925 Update metadata.py
save confirmed edges after computing cyclomatic complexity
2020-05-12 01:25:27 -04:00
gaasedelen 2bb4b9ebf3 vastly improves Lighthouse pintool performance for larger workloads 2020-05-04 05:35:02 -04:00
gaasedelen 710b13f38e Merge branch 'develop' 2020-04-23 06:00:37 -04:00
gaasedelen 69a595a875 updates readme and version # 2020-04-23 05:55:40 -04:00
gaasedelen 4661517cb3 hack to hide binja featuremap by default, since it will probably collide with the coverage overview dock position :-X 2020-04-23 05:03:09 -04:00
gaasedelen 3f0cfa856e bugfix where paint wouldn't fully refresh after changing themes 2020-04-23 04:20:38 -04:00
gaasedelen 8b9382d414 document the CoverageFile format a bit...... 2020-04-22 01:17:29 -04:00
gaasedelen a0c77f0f72 lower the font size a bit for macos 2020-04-21 23:13:44 -04:00
gaasedelen 22d48fa52b more robust path creation 2020-04-20 19:42:24 -04:00
gaasedelen 8cbfffe2e0 actually detect when IDA is in batch mode..... 2020-04-20 19:42:06 -04:00
gaasedelen 1df982ca4c ensure lighthouse stays disabled while using disassemblers headlessly 2020-04-20 18:05:16 -04:00
gaasedelen 859b994bf7 improve styling cross-platform/DPI 2020-04-20 15:42:08 -04:00
gaasedelen b610b1ee32 fixes bug where combobox would immediately close on binja linux 2020-04-20 04:56:26 -04:00
gaasedelen 33ef4e47a2 fix bug where drcov parser could fail to get correct filename from a crossplatform path... 2020-04-20 04:55:31 -04:00
gaasedelen 4eaca66caa fix bug where refresh could run twice when opening cov overview 2020-04-20 03:17:42 -04:00
gaasedelen 4e8f5d3bbc improve table header styling, dialogs 2020-04-20 03:17:03 -04:00
gaasedelen a943580b2c improved warning dialogs a bit... 2020-04-19 20:52:43 -04:00
gaasedelen 112e5d6a3b fix partial painting in binja... 2020-04-19 18:00:32 -04:00
gaasedelen 75572aed33 fix so the 'prefix' delim does not show in the table / matches what IDA renders... 2020-04-19 18:00:15 -04:00
gaasedelen 7c1573bfd6 cleanup of TODO's 2020-04-19 02:33:56 -04:00
gaasedelen feb83fc5d5 re-enable headless access via lighthouse.get_context(...) 2020-04-18 23:13:23 -04:00
gaasedelen 3f33c3cb45 enable live rebasing (at least... for IDA) 2020-04-18 22:15:57 -04:00
gaasedelen 9c0ecbc81e a few fixes and tweaks for robustness, performance 2020-04-18 20:21:50 -04:00
gaasedelen a0367a85da refresh hexrays views automatically 2020-04-18 03:29:09 -04:00
gaasedelen 4708422c6a improve the stability of the IDA painter 2020-04-18 02:19:42 -04:00
gaasedelen a789220b56 improve metadata collection 2020-04-17 23:46:34 -04:00
gaasedelen bc77c0ece2 wrap force clearing with a waitbox... 2020-04-17 23:01:27 -04:00
gaasedelen 4d94680b94 better resource cleanup when unloading (helps IDA close faster...) 2020-04-17 14:01:04 -04:00
gaasedelen b2c6695042 improve double click jump precision to go to first block with coverage in function 2020-04-17 02:03:03 -04:00
gaasedelen c2ceb47b84 totally overhaul painting to support notion of 'streaming' 2020-04-17 01:38:40 -04:00
gaasedelen 0d52ef5068 misc fixes / tweaks 2020-04-17 01:37:46 -04:00
gaasedelen 61b8fb7668 better handling of subsystem lifetimes 2020-04-17 01:37:21 -04:00
gaasedelen 68f8c884f9 make theme util function more accessible 2020-04-16 23:46:17 -04:00
gaasedelen a0b375064a improve painting perf 2020-04-15 21:49:03 -04:00
gaasedelen c5dbfb1062 more verbose painter output 2020-04-13 21:11:53 -04:00
gaasedelen ea8cd61122 improve robustness of update check 2020-04-13 20:52:35 -04:00
gaasedelen 4aa116580e minor bugfixes / QOL 2020-04-13 10:04:53 -04:00
gaasedelen 6e6084a058 adds minimal update check 2020-04-13 10:04:25 -04:00
gaasedelen 481c624b7e improve theme reactivity for binja 2020-04-13 08:59:02 -04:00
gaasedelen a3183f21f2 improve theme loading / hinting for IDA 2020-04-12 19:53:58 -04:00
gaasedelen 33b3d4e0b6 Merge branch 'fbl_binja_refactor' into develop 2020-04-12 10:40:18 -04:00
gaasedelen 7fd1e46e53 fixes instructions in partially painted nodes not getting unpainted in binja... 2020-04-12 10:35:53 -04:00
gaasedelen 4ab65c985b cleanup, fixes right click regression with IDA 2020-04-12 09:03:43 -04:00
gaasedelen 28b0ecd49c automatically build the metadata cache when the coverage overview is first shown (if the cache is not already built) 2020-04-12 08:48:06 -04:00
gaasedelen c4cf78c1dd reconcile the binja refactor with IDA 2020-04-12 04:19:04 -04:00
gaasedelen e5b29f97b7 misc cleanup of binja refactor branch & TODO's 2020-04-11 00:00:17 -04:00
gaasedelen 36a37935dc reduce debug prints in binja, manage widget view state per tab 2020-04-10 23:25:55 -04:00
gaasedelen 65fd677758 prioritize navigating to a function start, if a function start block is shared 2020-04-10 21:57:52 -04:00
gaasedelen aa4936e269 enable priority painting for binja 2020-04-10 17:42:40 -04:00
gaasedelen 00f82a2181 only paint instructions in partially executed binja nodes 2020-04-10 16:54:20 -04:00
gaasedelen 1f7e525aeb made 'jump' case insensitive for 'sub_...' funcs in binja 2020-04-10 08:12:49 -04:00
gaasedelen 1a13b23345 fixes bug where one could not 'jump' to a renamed function 2020-04-10 08:11:52 -04:00
gaasedelen 52fb3e70d0 emancipate nodes from functions, this allows coverage to properly be computed for functions that share nodes 2020-04-10 07:46:58 -04:00
gaasedelen ff2c0d9619 switch to pprofile for line profiling 2020-04-10 05:27:08 -04:00
gaasedelen ebea88465f binja-specific perf wins 2020-04-10 03:05:37 -04:00
gaasedelen 67e5caf62d mutual perf wins for metadata caching 2020-04-10 03:03:53 -04:00
gaasedelen a6eeafe180 overhaul for proper binja integration 2020-04-08 11:42:32 -04:00
gaasedelen 8c4e29fe50 bugfix: automatically evaluate the shell expression when switching to the hot shell 2020-04-08 01:43:13 -04:00
gaasedelen 9eca228925 fixes bug that could cause one to be prompted multiple times for a composition name 2020-04-08 01:06:22 -04:00
gaasedelen 553eeb7d23 fixes grammar regression... 2020-04-08 00:43:06 -04:00
gaasedelen 914b731676 fixes bug where combobox could get stuck closed 2020-04-04 23:17:06 -04:00
gaasedelen 6571b0735f aggregate symbol was unusable in compositions 2020-04-04 19:13:34 -04:00
gaasedelen 83e3b423b9 removes range syntax from grammar, buggy & nobody uses it... 2020-04-04 19:10:58 -04:00
gaasedelen 2ed77f305d improve xref dialog for high dpi 2020-04-04 17:44:48 -04:00
gaasedelen 25ff8ed245 fix theme auto color selection bugginess 2020-04-04 16:24:27 -04:00
gaasedelen 3054246a8d bugfix for theme auto paint color selection 2020-04-04 08:30:36 -04:00
gaasedelen 5670e3116c improve xref dialog 2020-04-04 03:19:49 -04:00
gaasedelen 8a2c011636 combobox style tweaks 2020-04-04 02:37:14 -04:00
gaasedelen 6ad0af71cf fix bug when unable to find mappable instructions in cov data 2020-04-04 02:11:59 -04:00
gaasedelen 181b13d0f7 more metadata cache cleanup, robustness, QUALITY SOFTWARE 2020-04-03 21:26:22 -04:00
gaasedelen 5433cdc8e4 streamline metadata collection, allow transition from async to synchronous 2020-04-03 07:18:54 -04:00
gaasedelen 416a46b8ab small QoL tweaks 2020-04-03 07:16:00 -04:00
gaasedelen db1f4ebc68 make coverage xref menu action only appear if there is coverage loaded 2020-04-03 00:19:05 -04:00
gaasedelen c44f35e5f4 add a dark / light coverage paint variant to the themes 2020-04-02 23:55:42 -04:00
gaasedelen 7d7ee5b9f0 improve click + drag text selection in composing shell 2020-04-02 21:38:12 -04:00
gaasedelen 4d36be57c8 add a menu option to dump unmappable coverage data to the console... 2020-04-02 21:06:51 -04:00
gaasedelen 98745a09c4 a few more readme updates 2020-04-02 20:34:27 -04:00
gaasedelen 6dcb3c769c Update README.md
at least some basic updates to the readme so as to not confuse anyone trying to use the dev branch
2020-04-02 20:10:03 -04:00
gaasedelen 11d5f9e62f enforce stricter logic around fuzzy name matching #63 2020-04-02 19:19:52 -04:00
gaasedelen cad8679170 tweaks & typos, i'm tired 2020-04-02 08:13:04 -04:00
gaasedelen 410adc45a5 improve coverage xref styling 2020-04-02 08:03:59 -04:00
gaasedelen 02ea88e3d7 adds fallback selector dialog if the database's loaded module (binary) cannot be found in a coverage file. closes #63 2020-04-02 08:03:35 -04:00
gaasedelen e89a36b9d6 minor binja tweaks, to keep dev working with it for now... 2020-04-02 06:00:34 -04:00
gaasedelen f6902baf38 update drcov parser to account for multi-segment modules 2020-04-02 04:42:57 -04:00
gaasedelen 79c90db5b0 fixes bug where a leftover / mostly deleted coverage overview could get left hanging around 2020-04-02 00:39:30 -04:00
gaasedelen 93228c880b Merge branch 'fbl_themes' into develop 2020-04-02 00:03:42 -04:00
gaasedelen 4b63a0f857 Rename dark theme, again. crosses off themes from readme 2020-04-02 00:02:25 -04:00
gaasedelen fb65c06b1c some cleanup 2020-04-01 23:55:09 -04:00
gaasedelen 2af8854673 add a bit of theme validation ... 2020-04-01 23:54:57 -04:00
gaasedelen 9b85603828 rename 'classic' theme to 'synthwave' 2020-04-01 23:19:54 -04:00
gaasedelen f7ade4eaa3 enable theme change button 2020-04-01 23:18:59 -04:00
gaasedelen be7815ff06 remove hardcoded colors from parts of HTML report 2020-04-01 23:08:08 -04:00
gaasedelen a52b8d5fda split out all theme-dependent code into refreshable functions 2020-04-01 09:15:45 -04:00
gaasedelen 5ae17c85c8 add theme change callback to the palette 2020-04-01 07:32:18 -04:00
gaasedelen 2a2b646f29 minor cleanup / refactoring of palette 2020-03-31 22:43:27 -04:00
gaasedelen 9946863e41 populate user theme dir, auto-select best theme, save/load theme preference from disk, 2020-03-31 19:18:25 -04:00
gaasedelen 2d7d0d598b adds light theme 2020-03-31 02:31:16 -04:00
gaasedelen 72ec6cccf4 updates, more tweakable fields 2020-03-31 02:30:24 -04:00
gaasedelen d53e5032d0 initial theme/palette refactoring 2020-03-31 00:02:37 -04:00
gaasedelen da5942466a update dates and version number, since there won't be an v0.8.4 release... 2020-03-29 17:34:21 -04:00
gaasedelen 0ef5c9d9e1 expose the active CoverageDirector object instance for the disassembler interpreter, or other scripts to use 2020-03-29 17:31:07 -04:00
Jan Beck b1488c3086 Add compatibility with PIN 3.11 (#77)
* Add compatibility with PIN 3.11

Co-authored-by: gaasedelen <gaasedelen@users.noreply.github.com>
2020-03-29 05:36:04 -04:00
lucasg b8a996b5f1 Update README.md to provide more details on finding IDA plugin dir w/ API (#72) 2020-03-29 05:13:27 -04:00
Alexandre Maloteaux e5b9f34193 fix for ida 7.4 with python3.7 (#79)
* fix for ida 7.4 with python3.7

* fix html export too

* fix painter

* tweaked to use our own dict shims

* a few more minor fixes

Co-authored-by: gaasedelen <markus.gaasedelen@gmail.com>
2020-03-29 04:59:29 -04:00
gaasedelen 257d69594e fully deprecates IDA 6.X 2020-03-29 03:43:04 -04:00
gaasedelen 28ea6b8821 make drcov.py work natively outside of lighthouse 2020-03-29 02:43:28 -04:00
gaasedelen d93b52354f make it so that partially executed nodes do not have their whole graph node colored 2020-03-28 23:03:24 -04:00
gaasedelen e0d309025f improve lighthouse's accuracy on interleaved instructions 2020-03-25 06:22:45 -04:00
gaasedelen c57296e649 fix improper usage of traceback 2020-03-25 03:11:58 -04:00
gaasedelen c9788723ba remove unnecessary use of map, can cause ambiguity if left unevaluated 2020-03-25 03:11:04 -04:00
gaasedelen 29ecafd885 fixes bug where binja could hang while building metadata 2020-03-22 03:10:27 -04:00
gaasedelen b9d514823b shady temp fix to make lighthouse mostly work with current binja 2020-03-21 04:36:28 -04:00
gaasedelen 2eab6d902c python 3 compat tweaks 2020-03-21 04:36:28 -04:00
gaasedelen 1ddffa5a19 most of the plumbing for supporting an image rebase, barring more testin... 2020-03-21 04:36:28 -04:00
gaasedelen e70c246637 make BADADDR more 'universal' 2020-03-21 04:36:28 -04:00
gaasedelen c0afc3b0c1 create 'is disassembler busy' api 2020-03-21 04:36:28 -04:00
gaasedelen 3fd3640517 Improves coverage exception warning code 2020-03-21 04:36:28 -04:00
gaasedelen e4ecc0dafd fixes minor Qt warning for IDA 7.4 2020-03-21 04:36:28 -04:00
Jakob Pearson ac12ef74db Minor changes, to compile on Windows with VS2017 (#69)
* Update build-x64.bat

* Update build-x86.bat

* Update ImageManager.cpp

* Update CodeCoverage.cpp

* Update CodeCoverage.cpp

* Update ImageManager.cpp
2019-10-22 12:13:27 -07:00
yrp 8593e976d1 Derp, syntax... 2019-10-22 15:06:42 -04:00
yrp b505af9956 Allow blank and comments in modoff 2019-10-22 15:06:42 -04:00
yrp 7578faea82 Fix directory types
Tbh I didn't analyze this in depth -- it was a type error causing
lighthouse to bail out so I just mashed everything into a set. It seems
to work tho, so #shipit
2019-10-22 15:06:42 -04:00
Dominik Maier 7caa33df2e Fixed python3 bytestrings (#71) 2019-10-22 15:03:17 -04:00
gaasedelen 9b1d579d3d improve disassembler logging compatibility 2019-05-18 16:56:11 -04:00
gaasedelen f6932bd8d0 adds qt_mainthread global 2019-05-17 19:17:39 -04:00
gaasedelen 80404a9f57 make disassembler version fields less annoying 2019-05-17 14:08:25 -04:00
volokitinss 366df2f5d8 Match upper case hex symbols as well (#67) 2019-05-10 12:29:52 -04:00
gaasedelen 7303d72fa2 python 3 compat 2019-04-19 19:58:29 -04:00
gaasedelen 8a656b10d3 tweak: make the function name column left aligned by default 2019-04-19 19:53:03 -04:00
gaasedelen 80d65fd584 styling consistency improvements 2019-04-19 19:52:34 -04:00
gaasedelen 8a03f035a1 improve shell / combobox styling 2019-04-17 17:15:18 -04:00
gaasedelen 9558763e52 bugfix: collapse the coverage combobox when its head is clicked in the expanded state 2019-04-16 18:24:40 -04:00
gaasedelen b82dba0d60 tweak: removes unnecessary composer shell border imposed by some Qt configs 2019-04-16 14:52:32 -04:00
gaasedelen e6f02ab873 allow smooth horizontal scrolling in xref window 2019-04-02 17:20:57 -04:00
gaasedelen 8316012782 switch xref attribution to bb granularity 2019-04-02 17:03:15 -04:00
gaasedelen fa3a13a085 allow for xrefing a batch 2019-04-02 12:26:42 -04:00
gaasedelen d6d0fbc7dd allow for smooth horizontal scrolling in coverage table 2019-04-01 13:54:42 -04:00
gaasedelen 1463949921 Merge branch 'fbl_block_blame' into develop, resolves #8 2019-03-31 19:56:36 -04:00
gaasedelen b6bf203b8c cleanup IDA integration of xrefs 2019-03-31 19:48:30 -04:00
gaasedelen 8052798c23 cleanup the xref dialog a bit 2019-03-31 19:17:47 -04:00
gaasedelen f89f3609f9 fix issue where coverage names are not saved to composed sets 2019-03-31 19:06:57 -04:00
gaasedelen 02d52fce73 add timestamps to coverage 2019-03-31 18:55:24 -04:00
gaasedelen 44cb1c8113 rough coverage xref dialog 2019-03-31 17:57:57 -04:00
gaasedelen dd50a3a8c8 rough draft of coverage xref for ida #8 2019-03-30 17:52:23 -04:00
gaasedelen ffc45f86a8 further refactor some loading code 2019-03-30 14:13:50 -04:00
gaasedelen e4ea2956e8 improve batch loading, refactor loading in general 2019-03-30 14:13:49 -04:00
gaasedelen 5468bef842 refactoring some loading code, add 'trace' format 2019-03-30 14:12:56 -04:00
gaasedelen 420c735600 rename testcases 2019-03-30 14:11:17 -04:00
gaasedelen beb715cf9b tweaks to reader & cov file code 2019-03-30 14:11:17 -04:00
gaasedelen 72fe0f90e6 add instruction sizes to metadata cache 2019-03-30 14:10:11 -04:00
xarkes 7cab1c1db9 Added python3 support (#62)
* Added Python3 support
* remove dependency of six library
* fixups based on Binja with Py3.6
2019-03-17 20:56:28 -04:00
gaasedelen b27698520b split metadata cache out of director 2019-03-15 19:48:57 -04:00
gaasedelen a55ede77f9 bugfix: deleted functions were not properly removed from metadata cache on refresh 2019-03-15 19:47:26 -04:00
gaasedelen 0f51554d8c suppress some IDA warning messages during func renames 2019-03-15 19:43:26 -04:00
gaasedelen 50cfd522f5 fix bad code in recent coverage loading work 2019-03-15 19:42:17 -04:00
gaasedelen 03717b03a2 rough plumbing for additional coverage formats #41 2019-03-10 16:06:10 -04:00
gaasedelen 92dfae662c Merge pull request #53 from v-p-b/binja_api_fix
Binary Ninja API fixups
2019-02-15 15:41:23 -05:00
gaasedelen 3b1c776455 Merge pull request #56 from AndrewFasano/master
Add python3 support for drcov parser
2019-01-31 12:37:04 -05:00
Andrew Fasano 5423bbf7e9 Add python3 support for drcov parser
And remain compatible with python2
2019-01-25 15:50:24 -05:00
b 0a5870a65a Added comments 2019-01-06 09:36:18 +01:00
b f3fd77e863 Stable+Dev compatibility 2019-01-06 08:55:50 +01:00
b 4b5ceabed1 Binja API: Getters -> Data members 2019-01-02 16:06:27 +01:00
gaasedelen 675cc8738d Merge branch 'develop' 2018-12-12 13:49:09 -05:00
gaasedelen 1ab2212115 Merge pull request #51 from yrp604/patch-1
Fix buggy drcov parsers
2018-12-12 13:37:59 -05:00
gaasedelen 713ac9d2aa Merge branch 'develop' into patch-1 2018-12-12 13:37:34 -05:00
_yrp 357bc5f839 Fix buggy drcov parsers
These two are definitely wrong, the other one might be too.

Found when some jerk compiled their own linux drcov plugin -- otherwise the precompiled binaries still spit out v2 module tables.
2018-12-11 20:45:35 -08:00
gaasedelen 8d60f778ef fixes botched fix, #50 2018-12-09 21:37:03 -05:00
gaasedelen 677723ed18 group lighthouse plugin menus in binja 2018-12-02 07:54:45 -05:00
gaasedelen b6e9ffe576 update to work with binja dev (breaks stable :-x) 2018-12-02 07:49:22 -05:00
gaasedelen dae71152e4 fixes thread safety issue in IDA 7.2, #50 2018-12-02 07:35:07 -05:00
gaasedelen f071dc6dce minor updates to pin documentation, #46 2018-11-18 20:59:21 -05:00
gaasedelen 83a2d71402 Merge pull request #49 from Ayrx/revert-47-improve-frida-script
Revert "Improve Frida coverage collection on quick exits."
2018-11-06 14:27:58 -05:00
Terry Chia e890ac60e9 Revert "Improve Frida coverage collection on quick exits." 2018-11-06 16:04:19 +08:00
gaasedelen 2b82bb69d5 fix edge case that could crash painter thread 2018-11-06 01:00:22 -05:00
gaasedelen 7cc235bde4 fix pintool on newer versions of gcc #46 2018-11-05 20:34:29 -05:00
gaasedelen 53e791b96e Merge pull request #47 from Ayrx/improve-frida-script
Improve Frida coverage collection on quick exits.
2018-10-31 14:36:21 -04:00
Terry Chia 48fd65de34 Improve Frida coverage collection on quick exits. 2018-10-30 14:48:20 +08:00
gaasedelen e6cf1d8292 Update drcov format v4 parsing for Windows, #45 2018-10-19 15:05:37 -04:00
gaasedelen 04225a0351 avoid division by zero, fixes #44 2018-10-19 15:00:45 -04:00
gaasedelen d3ba2c0ad7 hotfix for binja versioning bug #42 2018-10-12 11:27:20 -04:00
gaasedelen a81aa6d590 hotfix for personal binja licenses 2018-10-11 10:39:01 -04:00
gaasedelen fbbdd91576 Merge branch 'hotfix-0.8.1' 2018-10-03 01:45:19 -04:00
gaasedelen 03cc67e629 further tweak table row height based on DPI 2018-10-03 01:44:41 -04:00
gaasedelen 116f82e6bc only disable painting (on clear) if other coverage is open 2018-10-02 16:45:42 -04:00
gaasedelen 3f22eddc82 auto-cache metadata when opening coverage overview 2018-10-02 16:44:15 -04:00
gaasedelen 2bc6f109db fix for minor regression 2018-10-02 16:00:25 -04:00
gaasedelen 316e7622d7 bumps version number 2018-09-29 17:40:44 -04:00
gaasedelen bf66b02df9 fixes DPI regressions 2018-09-29 17:39:24 -04:00
gaasedelen bd8862923e fixes regression that could cause IDA 6.8 --> 7.0 to hang on exit 2018-09-29 17:34:43 -04:00
gaasedelen 8e09989a67 Merge branch 'develop' 2018-09-23 17:53:35 -04:00
gaasedelen d81e33f5f5 updated readme 2018-09-23 17:51:49 -04:00
gaasedelen 35b45b2a7b minor bugfix 2018-09-23 16:29:01 -04:00
gaasedelen fe0ff33d38 bump version number 2018-09-22 21:17:51 -04:00
gaasedelen 0f2b9897f8 Merge branch 'master' into develop 2018-09-22 20:10:12 -04:00
gaasedelen 72725d7b86 disable lighthouse in binja headless 2018-09-22 20:08:05 -04:00
gaasedelen aaea2cb472 add filepath tooltip to combobox 2018-09-21 21:07:27 -04:00
gaasedelen d1fc0f7f06 comment cleanup, spellchecks, etc 2018-09-21 20:33:17 -04:00
gaasedelen b6d52dad5c fix binja bug where blank names did not reset 2018-09-04 12:30:37 -04:00
gaasedelen 419b77aebe minor fixes, painting, table 2018-09-04 11:58:42 -04:00
gaasedelen 7946d32edd improve cross-platform styling/appearance 2018-09-03 18:05:19 -04:00
gaasedelen 8ea8f6c2d7 fix crashes & hacks for binja dock widget 2018-09-02 19:07:58 -04:00
gaasedelen c532f1f658 minor fixes 2018-09-02 18:07:42 -04:00
gaasedelen e4a1c1cd70 refactor some metadata collection code, fix some binja deadlocks 2018-09-02 18:07:25 -04:00
gaasedelen 5dc9c21141 update disassembler shimming/API a bit 2018-09-02 17:34:15 -04:00
gaasedelen 13e601610a adds basic export to HTML functionality 2018-09-02 10:19:55 -04:00
gaasedelen 5d2abe6162 refactor DatabaseCoverage instantiation, director 2018-09-01 20:58:29 -04:00
gaasedelen 4a26f7b040 minor fixes 2018-09-01 12:46:26 -04:00
gaasedelen 2f78f4638b refactor coverage loading, adds more verbose warnings 2018-09-01 12:34:46 -04:00
gaasedelen 3defabc4f9 add a property to identify abnormal coverage mappings 2018-09-01 12:34:46 -04:00
gaasedelen 0222c1a16a more explicit warnings for coverage load failures 2018-09-01 12:34:45 -04:00
gaasedelen 6a08093d2f Merge branch 'fbl_table_rework' into develop 2018-09-01 12:22:09 -04:00
gaasedelen b98baa6206 minor fixes 2018-08-30 23:13:35 -04:00
gaasedelen b68c9e4dc5 creates coverage settings menu 2018-08-30 23:02:28 -04:00
gaasedelen c7f80b7bfe refactor some core painting 2018-08-30 23:01:49 -04:00
gaasedelen 4baaa3500d minor fix 2018-08-30 23:00:32 -04:00
gaasedelen 011b06d936 hopefully more stable painting for IDA 6.8 -> 7.1 2018-08-27 02:11:42 -04:00
gaasedelen dfe7407e67 allow the painter to be enabled/disabled 2018-08-27 02:08:51 -04:00
gaasedelen 9b2d8b8273 binary ninja now supports original filename 2018-08-23 04:16:30 -04:00
gaasedelen 8df09f23d5 some plumbing for potential future features 2018-08-17 20:41:42 -04:00
gaasedelen 3300c5abcf cleanup CoverageTableModel 2018-08-14 19:58:40 -04:00
gaasedelen 109fcd1aac remove random file :') 2018-08-14 18:45:16 -04:00
gaasedelen 0aa2b9602f enable table navigation with arrow keys & vim keys 2018-08-12 13:41:50 -04:00
gaasedelen fbf0a0da10 combobox theme enlightenment 2018-08-11 23:36:17 -04:00
gaasedelen a7166eb9f2 enlighten shell hint to palette colors 2018-08-11 22:57:45 -04:00
gaasedelen f4ba88f581 fix regression with IDA 2018-08-11 22:23:37 -04:00
gaasedelen b0c4f16ee4 make the shell hint even less aggressive 2018-08-11 22:16:04 -04:00
gaasedelen 868e071483 split coverage table into Model/View/Controller 2018-08-11 22:05:10 -04:00
gaasedelen 530941195a misc 2018-08-11 19:03:03 -04:00
gaasedelen 20305574a2 made composing shell hint less aggressive 2018-08-11 14:14:53 -04:00
gaasedelen 5c84de332a cleanup IDA painting a bit 2018-08-11 13:58:56 -04:00
gaasedelen 0ad5ac72d8 fix rename events misfiring on binja 2018-08-11 13:58:36 -04:00
gaasedelen 8610983f98 better cyclomatic complexity 2018-08-11 13:57:32 -04:00
gaasedelen 8cbc9d9bef toggle coverage table column alignment 2018-08-10 20:00:57 -04:00
gaasedelen 53f58e5560 adds experimental support for Binary Ninja 2018-08-10 12:46:13 -04:00
gaasedelen c509770fc3 some cleanup, fixes a few IDA compatibility breakages 2018-08-09 20:09:33 -04:00
gaasedelen 33e054ce34 update dev scripts 2018-08-09 20:08:15 -04:00
gaasedelen c48af5b9d0 fixes for renaming/prefixes via table 2018-08-09 18:10:22 -04:00
gaasedelen de064f088e makes lighthouse theme aware for binja 2018-08-09 17:00:51 -04:00
gaasedelen bf37c29dc0 refactored Qt usage, dockable binja widgets 2018-08-09 15:07:04 -04:00
gaasedelen ea3fbcb6f8 catch binaryninja rename events 2018-08-07 23:18:35 -04:00
gaasedelen 1492ea0c20 *greatly* improved binja metadata collection speed 2018-08-07 20:14:40 -04:00
gaasedelen 92797f30b0 improved binja painting 2018-08-07 18:28:04 -04:00
gaasedelen 6cadbf998e updated & categorized TODO's 2018-08-07 14:25:07 -04:00
gaasedelen 5fdb2f1735 added other lighthouse menu options to binja 2018-08-07 13:50:58 -04:00
gaasedelen 895951ecc9 general table appearance improvements for binja 2018-08-07 13:29:51 -04:00
gaasedelen 4b587696ab fix bug parsing binja dev string 2018-08-07 13:28:21 -04:00
gaasedelen 74798d1fb8 slight tweaks to the combobox spacing/grid lines 2018-08-07 13:27:59 -04:00
gaasedelen fef71a7092 disassembler API now built on Abstract Classes 2018-08-06 17:17:52 -04:00
gaasedelen b75d4947b5 strip copied code from binja painter 2018-08-05 15:05:42 -04:00
gaasedelen 2b42355e94 improved painting performance on BN 2018-08-05 15:02:14 -04:00
gaasedelen 150a984bd9 enable logging in IDA launchers 2018-08-03 20:10:41 -04:00
gaasedelen ba0f5508a7 functioning binja prototype 2018-08-03 20:10:11 -04:00
gaasedelen 6b425c1977 split painting engine 2018-08-03 17:02:08 -04:00
gaasedelen 9e5e898952 core coverage loading works in binja 2018-08-03 11:09:09 -04:00
gaasedelen 9c2c9d8850 shim metadata collection 2018-08-02 23:41:07 -04:00
gaasedelen bf3389740c remove idaapi dep from coverage_overview 2018-08-02 20:15:54 -04:00
gaasedelen a9ada01183 refactor mainthread detection to be agnostic of disassembler 2018-08-02 19:58:27 -04:00
gaasedelen fa605c8261 remove idaapi dep from coverage 2018-08-02 19:49:17 -04:00
gaasedelen 898788cdbb migrate to universal waitbox 2018-08-02 18:32:10 -04:00
gaasedelen c19ffd275c remove idaapi/idautils as core dependency 2018-08-02 18:15:25 -04:00
gaasedelen c0477c4724 move prompt_string to qt 2018-08-02 18:14:53 -04:00
gaasedelen 8996e4e009 Integration seperation 2018-08-02 16:42:38 -04:00
gaasedelen 7202e2f455 start of binja port 2018-08-01 17:42:22 -04:00
gaasedelen 258b901273 Revert "refactor coverage table code into its own QTableView"
This reverts commit 26f391b4b4.
2018-08-01 17:40:15 -04:00
gaasedelen d8db54dcbd Merge pull request #39 from yrp604/master
Improve frida coverage saving
2018-07-10 10:44:44 -04:00
yrp b92a25906f Make coverage saving more robust
The most bug prone part of frida (in our usage) seems to be process
detatching. Previously the script would detatch and then persist all
coverage data once detatching had succeeded. This of course means that
if detatching fails or hangs, all collected coverage is lost. This is
Not Good.

This commit adds a SIGINT handler which persists the existing coverage
before calling `_exit(1)`. While this doesn't seem to be an especially
quick operation, it does save the coverage before exiting.

This commit does not address Windows usage, or processes exiting in ways
other than SIGINT.
2018-07-10 00:24:36 -07:00
_yrp bb9c4bafd0 Merge pull request #1 from gaasedelen/master
merge upstream
2018-07-09 22:26:24 -07:00
gaasedelen 5028b163d7 Merge pull request #35 from vnd-eset/master
Add support for ASCII DrCov logs
2018-04-27 12:54:45 -04:00
gaasedelen 1cbff296af Merge branch 'hotfix-0.7.2' 2018-04-23 21:42:54 -04:00
gaasedelen 36127e74ef hotfix for new drcov log formats, issue #36 2018-04-23 21:42:07 -04:00
gaasedelen 26f391b4b4 refactor coverage table code into its own QTableView 2018-04-17 19:02:31 -04:00
Krzysztof Katowicz-Kowalewski d48f88d688 Add support for ASCII DrCov logs 2018-04-11 05:59:38 -07:00
gaasedelen e0cf26acc8 Merge pull request #33 from x9090/dev-x9090
Fixed missing node metadata due to overly aggressive filter
2018-03-23 13:02:35 -04:00
x9090 4d529e2844 Fixed missing node metadata due to overly aggressive filter 2018-03-20 11:55:45 +08:00
gaasedelen 006c46b472 hotfix for MacOS hangs (and crashes) on close #31 2017-12-29 23:16:16 -05:00
gaasedelen f2dff09b99 Merge pull request #28 from paulsapps/feature_copy_multiple_function_name_and_addresses
Allow copy operations to work on multiple selected functions
2017-12-17 20:44:11 -05:00
Paul 3024fd208d Allow copy operations to work on multiple selected functions 2017-12-18 00:51:00 +00:00
gaasedelen dc7ef504db Merge branch 'develop' 2017-12-07 05:30:04 -05:00
gaasedelen d715736940 updated README.md's 2017-12-07 05:27:53 -05:00
gaasedelen 4eb0b445ed version bump to v0.7.0 2017-12-07 05:27:15 -05:00
gaasedelen 724516f2f8 lower the 'big' IDB threshold to 50k functions to be safe 2017-12-07 05:23:42 -05:00
gaasedelen 7027d37260 improve pintool performance using PIN_FAST_ANALYSIS_CALL 2017-12-07 05:23:42 -05:00
gaasedelen e21e238903 updated frida README.md 2017-12-05 15:38:30 -08:00
gaasedelen 9e3a6fa4c4 fixes formatting error on print statements 2017-12-05 15:35:50 -08:00
gaasedelen 2775ae2922 cleaning up some types in pintool, closes #25 2017-12-05 16:48:29 -05:00
gaasedelen ad7062e556 implemented the full refresh via right click 2017-12-04 20:32:42 -05:00
gaasedelen 8a2aec35f9 bugfix: corrects database wide coverage % 2017-12-04 19:35:21 -05:00
gaasedelen 61d4cdb777 make jump case insensitive for 'sub_XXXXXXXX' targets 2017-12-04 17:36:28 -05:00
gaasedelen 10dd778233 exclude exception handling chunks from metadata 2017-12-04 17:09:44 -05:00
gaasedelen 6a1a326e36 fuzzier module name detection 2017-12-04 15:17:54 -05:00
gaasedelen 618b6ce278 color tweaks 2017-12-04 01:58:26 -05:00
gaasedelen c155755b17 improving function rename event handler 2017-12-03 20:48:01 -05:00
gaasedelen 1c7e4ebc6a keep local director reference in the coverage overview 2017-12-03 19:04:52 -05:00
gaasedelen 8e626a66a5 right click function prefixing in coverage overview 2017-12-03 16:59:29 -05:00
gaasedelen 46705fbb86 started right click contetxt menu for the coverage overview 2017-12-03 16:06:25 -05:00
gaasedelen 7428502471 symbolic deletion of the aggregate set (clear all) 2017-12-03 01:45:27 -05:00
gaasedelen fa43bb8aab set default table color background for consistency 2017-12-03 00:05:31 -05:00
gaasedelen 54e5e13628 bugfix: coverage % of shell search are now size relative 2017-12-03 00:04:26 -05:00
gaasedelen 99e758ae21 only log with LIGHTHOUSE_LOGGING env variable set, fixes #12 2017-12-02 23:18:19 -05:00
gaasedelen 61804887ee update metadata cache on function rename, fixes #23 2017-12-02 22:50:15 -05:00
gaasedelen 5769533f94 renamed _database_metadata, unnecessary and confusing 2017-12-02 22:28:35 -05:00
gaasedelen 0bc487668d move callback plumbing out of the director 2017-12-02 22:25:50 -05:00
gaasedelen c070308f49 abstract away metadata caching in the director 2017-12-02 21:46:06 -05:00
gaasedelen 112034c565 bugfix: fixed a race condition that would crash lighthtouse when saving compositions 2017-12-01 13:48:49 -05:00
gaasedelen 58f1260f0c bugfix: some IDA 7 users (AUTOIMPORT_COMPAT_IDA695=NO) could not save compositions 2017-12-01 11:11:51 -05:00
gaasedelen d39f07c5a7 disable a verbose/heavy hexrays log item 2017-11-29 18:37:36 -05:00
gaasedelen e071528cd9 defer composer shell searches on large databases 2017-11-29 18:36:58 -05:00
gaasedelen 7038a74f74 disable metadata refresh on coverage load after initial cache 2017-11-29 16:05:22 -05:00
gaasedelen d6853f1358 fix missing icons bug (install location dependent) 2017-11-01 16:49:59 -04:00
gaasedelen 4c4863ff21 removes old dev script 2017-11-01 09:38:05 -04:00
gaasedelen c233f3a606 UI_Hooks seem straight broken for some users, removing. 2017-11-01 09:36:43 -04:00
gaasedelen ca4524df89 dock Coverage Overview in debug mode, fixes #16 2017-10-31 16:24:57 -04:00
_yrp 1aea26d018 Adds frida script for gathering code coverage (#17) 2017-10-24 12:35:50 -04:00
gaasedelen 6a80d3421f Name demangling IDA 6.8 --> IDA 7.0, issue #13 2017-10-01 14:07:38 -04:00
gaasedelen 638bff9fef merging hotfix from master to develop 2017-10-01 12:31:18 -04:00
gaasedelen a6eac8aa83 hotfix for IDA 7 users running with AUTOIMPORT_COMPAT_IDA695 = NO 2017-10-01 12:22:02 -04:00
Florian Magin 8e27e00cda Change displayed function names to demangled names (IDA<=6.95) 2017-09-29 11:36:21 -04:00
gaasedelen a149769935 Merge branch 'develop' 2017-09-21 13:44:30 -07:00
gaasedelen fe4ea54994 updated README.md 2017-09-21 13:42:56 -07:00
gaasedelen a82378e393 version bump to v0.6.0 2017-09-21 12:51:09 -07:00
gaasedelen ec6c66149c 32bit & 64bit build scripts for windows pintool 2017-09-21 11:48:49 -07:00
gaasedelen efe4880728 pseudo merge of fbl_pintool 2017-09-20 11:49:47 -07:00
gaasedelen 7dcb58dc44 cleaning up a few TODO's 2017-09-20 09:40:44 -07:00
gaasedelen 177f462e82 fix rare crash or hang of IDA on close 2017-09-15 06:25:46 -07:00
gaasedelen cfef581b87 fixes drcov log format for DR 7.0.0 on linux. #10 2017-08-29 12:25:57 -07:00
gaasedelen 3d7ae9e459 speed and memory performance increases for metadata cache 2017-08-12 10:34:22 -07:00
gaasedelen 4108647933 fixes bug with IDA 7 / PyQt5 2017-08-11 17:07:24 -07:00
gaasedelen d9c223c1a0 bugfix for possible underflow during priority painting 2017-08-11 16:26:22 -07:00
gaasedelen 25e548e277 Merge branch 'fbl_batch' into develop 2017-08-11 14:22:09 -07:00
gaasedelen 7e16ec6cc7 more reliable color selection 2017-08-11 14:21:26 -07:00
gaasedelen 501c0306cf updated batch icon to be less awful 2017-08-10 20:05:50 -07:00
gaasedelen 259bc72f3b refactor & simplify loading process flow 2017-08-10 19:47:40 -07:00
gaasedelen 7901ac7f10 added prompt_string() as a non-IDA blocking string input dialog 2017-08-10 19:46:48 -07:00
gaasedelen 36ca02da40 added batch.png icon 2017-08-10 19:46:01 -07:00
gaasedelen 78d0ccc553 removing the self._director reference from the CoverageOverview 2017-08-10 11:30:04 -07:00
gaasedelen 6bde62c677 fixes potentially unsafe behavior 2017-08-10 09:57:05 -07:00
gaasedelen f563be0247 fixes issue #9 2017-08-10 09:54:59 -07:00
gaasedelen 26ee4eb2bc rough implementation of batch loading 2017-08-09 20:53:34 -07:00
gaasedelen b356dce551 misc updates 2017-08-09 20:50:30 -07:00
gaasedelen 589f69dc3f add shorthand peeking to the director 2017-08-09 20:35:58 -07:00
gaasedelen 4395b0cbdf fix for shell regression 2017-08-09 20:25:24 -07:00
gaasedelen 33ff18447c added batch menu items 2017-08-09 16:55:44 -07:00
gaasedelen 9cadf4cb25 refactor naming of 'load_coverage' to 'load_file' 2017-08-09 16:38:51 -07:00
gaasedelen 588f73f931 make the coverage object's unmap_all a public function 2017-08-09 16:24:20 -07:00
gaasedelen 62e712417d rename add_coverage to create_coverage 2017-08-09 16:23:34 -07:00
gaasedelen 2fb4b1b1e3 created add_addresses for coverage objects 2017-08-09 16:17:41 -07:00
gaasedelen d5446d8a91 better handling of malformed input / wrong files 2017-08-08 23:23:39 -07:00
gaasedelen 3b402839f6 reuse & refresh existing coverage overview when available 2017-08-08 20:42:59 -07:00
gaasedelen a91d04df4a bugfix a regression from refactoring 2017-08-08 19:40:15 -07:00
gaasedelen 945b2a6121 IDA 7 compatibility fixes 2017-08-08 19:35:27 -07:00
gaasedelen b392e416ed refactor of get_node() and get_function() 2017-08-08 15:33:24 -07:00
gaasedelen 71a896ce0e small naming refactor 2017-08-08 15:31:50 -07:00
gaasedelen bf8866f3ea added cyclomatic complexity metric 2017-08-08 15:30:49 -07:00
gaasedelen 46b6dd86f5 store metadata for intra-function edges 2017-08-08 15:29:01 -07:00
gaasedelen 9f05fdf689 case insensitive search when no capitilzation present 2017-08-07 18:53:21 -07:00
gaasedelen 5ecf965342 remember & reuse the last directory coverage was loaded from 2017-08-07 17:46:36 -07:00
gaasedelen 9073fe5d90 replace the '%' character with '_' in lifted function names 2017-08-07 17:27:34 -07:00
gaasedelen 72b7a48b8a bugfix: fixes crash on close for some builds of IDA 2017-08-07 17:15:16 -07:00
gaasedelen 2e88ffb4ca bugfix: hexrays hooks were not getting installed for some builds of IDA #7 2017-08-07 16:45:21 -07:00
gaasedelen dbb1dbbcab wait for threads to exit while unloading plugin 2017-08-07 16:40:41 -07:00
gaasedelen 070923f945 bugfix: stops double init of some plugin members 2017-08-07 15:40:12 -07:00
gaasedelen 11f108bca7 more verbose debug prints for issue #7 2017-07-25 12:10:35 -07:00
gaasedelen 0a8e1cc9ed Merge branch 'develop' 2017-07-18 17:30:53 -07:00
gaasedelen fce5a09df8 updated README.md 2017-07-18 17:30:02 -07:00
gaasedelen 5c6d5ce9b7 version bump to v0.5.0 2017-07-18 17:29:28 -07:00
gaasedelen 0c8c8d960a removing pintool for now (see fbl_pintool branch) 2017-07-18 17:26:59 -07:00
gaasedelen 652469ddc8 bufix for jumping in IDA 6.95, 7.0 2017-07-18 17:13:23 -07:00
gaasedelen cdd5c99726 fixes selection text color on 6.95 2017-07-16 20:31:54 -07:00
gaasedelen a82f9c5d0a highlight search query text 2017-07-16 20:31:37 -07:00
gaasedelen 70def45b1b fixes goosetin's derp 2017-07-16 20:30:43 -07:00
gaasedelen 440a7e0711 some cleanup of coverage, director files 2017-07-16 15:14:57 -07:00
gaasedelen 910fc02cc5 cleanup main plugin file 2017-07-16 15:11:45 -07:00
gaasedelen fc9b4084af Merge pull request #5 from agustingianni/adds_pintool
Adds pintool to gather coverage
2017-07-16 15:07:03 -07:00
Agustin Gianni a4e23cd5ba Adds pintool for code coverage logging
Implements a pintool that gathers code coverage information and saves it
using drcov format.
2017-07-16 20:41:54 +02:00
Agustin Gianni 73139c11d7 Fixes import of 'QPlainTextEdit' 2017-07-16 15:44:43 +02:00
gaasedelen e8c84238b2 improving stability 2017-07-15 15:40:10 -07:00
gaasedelen 18cf68acb0 changed the active selection color in the coverage table 2017-07-15 08:52:43 -07:00
gaasedelen 7ff189c69a added jump functionality to shell 2017-07-15 08:03:42 -07:00
gaasedelen ab4c1344f1 function_address --> row lookup for coverage table 2017-07-15 07:39:38 -07:00
gaasedelen b69daa9789 added get_function, get_function_by_name to metadata 2017-07-15 07:38:52 -07:00
gaasedelen e1c0965f22 misc cleanup, reorganization 2017-07-14 22:10:42 -07:00
gaasedelen 84a9f9fc62 Merge branch 'fbl_search' into develop 2017-07-14 19:54:46 -07:00
gaasedelen aa74ee592b hide vertical headers 2017-07-14 19:53:30 -07:00
gaasedelen 760aa0d976 bugfix to prevent lost cursor in the composer 2017-07-14 18:05:59 -07:00
gaasedelen baaccd1097 display the search/filter's coverage % in real-time 2017-07-14 17:57:01 -07:00
gaasedelen e211c94a2b performance improvements when loading file batch 2017-07-14 09:04:47 -07:00
gaasedelen c14437e2fc limit combobox to coverage with shorthand symbols 2017-07-14 09:04:34 -07:00
gaasedelen 3e9e4d78e3 fixes possible coverage inaccuracies 2017-07-14 09:04:20 -07:00
gaasedelen d67db0f5de cleaning up CoverageOverview and children 2017-07-13 18:46:53 -07:00
gaasedelen 25cce0ff36 fixes for previous commits 2017-07-13 08:24:06 -07:00
gaasedelen 58171381e1 naive search implementation 2017-07-12 22:04:02 -07:00
gaasedelen 782918a845 shell palette alias 2017-07-12 21:56:16 -07:00
gaasedelen a9914da34d alias the shell's self._line.toPlainText() 2017-07-12 21:54:52 -07:00
gaasedelen 5caa13bcd1 placeholder highlight functions 2017-07-12 20:40:26 -07:00
gaasedelen c544431702 groundwork for additional shell actions 2017-07-12 20:39:32 -07:00
gaasedelen aa16950c30 clear existing text format/colors before new highlight 2017-07-12 20:30:56 -07:00
gaasedelen c6298f235e disable bolding of table column headers when table is selected 2017-07-12 20:28:23 -07:00
gaasedelen 9bba79bf58 dev script additions 2017-07-11 23:20:01 -07:00
gaasedelen b575636d7b shim cleanup 2017-07-11 22:12:14 -07:00
gaasedelen 8a9dd37cb7 instability bugfix which could cause IDA to abort() 2017-07-11 22:08:56 -07:00
gaasedelen a0e893c27d IDA 7 support 2017-07-11 21:24:37 -07:00
gaasedelen b5e6d2f827 composing shell now readable on light/default theme (regression) 2017-07-10 15:13:42 -07:00
gaasedelen 20ace40fe3 support for IDA 7.0 2017-07-10 15:11:54 -07:00
gaasedelen c168514039 Merge branch 'develop' 2017-06-20 23:39:01 -07:00
gaasedelen 24687740f2 updated README.md 2017-06-20 23:37:48 -07:00
gaasedelen 9968d3c9e3 version bump to v0.4.0 2017-06-20 23:20:17 -07:00
gaasedelen 99ed7cb043 fixes bug where hotshell would not clear on empty input 2017-06-20 22:52:13 -07:00
gaasedelen 3309ad834a fixes some rare synchronization races 2017-06-20 22:38:21 -07:00
gaasedelen 2c7dd3f98b show coverage hint only when shell is active 2017-06-19 22:29:54 -07:00
gaasedelen b68dc15bea black table grid lines, force dark table theme 2017-06-19 21:24:10 -07:00
gaasedelen 7863af7eae cleanup ida.py 2017-06-19 18:02:15 -07:00
gaasedelen 4af87884b3 refactor / moving stuff around 2017-06-19 08:22:38 -07:00
gaasedelen 0dee80bfc8 split out debug/profiling code 2017-06-18 15:41:52 -07:00
gaasedelen 3b57d1aa28 some cleanup of painting.py 2017-06-18 14:08:18 -07:00
gaasedelen 327e5e534e cleanup coverage.py 2017-06-18 13:18:58 -07:00
gaasedelen a2e6e2af7f cleanup the main file a bit 2017-06-18 10:30:00 -07:00
gaasedelen 5f57590672 migrated hexrays hooks to the CoveragePainter 2017-06-17 15:28:44 -07:00
gaasedelen 1a0f56327e switching to a QTableView & QAbstractTableModel 2017-06-17 14:25:15 -07:00
gaasedelen e51cc84d04 fixes "Insttructions" typo 2017-06-17 14:23:40 -07:00
gaasedelen c30dbeb295 Merge branch 'fbl_async' into develop 2017-06-17 11:38:50 -07:00
gaasedelen 3d2164e90d fixes regression to give dark background to 0% table entries 2017-06-17 11:36:53 -07:00
gaasedelen e93c8c1855 fixes bug introduced by async operations 2017-06-17 11:36:15 -07:00
gaasedelen b927a27c39 asynchronous database painting 2017-06-17 10:27:50 -07:00
gaasedelen e609bfab23 asynchronous instruction painting 2017-06-17 08:48:40 -07:00
gaasedelen a26d7afb07 split up priority painting 2017-06-17 07:26:06 -07:00
gaasedelen eca4e2ac56 refactored repaint_function() 2017-06-16 14:27:54 -07:00
gaasedelen 9efaca1e34 dropping use of WeakValueDictonary for nodes, no longer needed 2017-06-16 14:26:12 -07:00
gaasedelen 3cd2d2c090 started prototyping priority painting 2017-06-16 13:20:06 -07:00
gaasedelen ac64c0a498 added instructions properties to function metadata/coverage 2017-06-16 13:11:47 -07:00
gaasedelen cdb9cbb104 renaming executed_nodes to nodes 2017-06-16 13:11:11 -07:00
gaasedelen 282984b01e enlighten to new hexrays painting 2017-06-15 11:14:16 -07:00
gaasedelen b388ce5ba1 Started painting code refactor 2017-06-15 10:54:04 -07:00
gaasedelen 16aa6342b8 use await_future 2017-06-15 10:51:24 -07:00
gaasedelen 2f0e260801 refactor how compositions are saved 2017-06-15 10:37:36 -07:00
gaasedelen ecf20a591a asynchronous AST evaluation for the hotshell 2017-06-14 22:18:38 -07:00
gaasedelen 3170f88296 created the composition cache 2017-06-14 21:52:57 -07:00
gaasedelen c076a654d1 added a coverage mask hash for speedier operations 2017-06-14 21:52:32 -07:00
gaasedelen 76c4bfce15 implemented cancellation of metadata refresh 2017-06-12 21:14:18 -07:00
gaasedelen 942498348a removes painting from the director 2017-06-12 16:47:45 -07:00
gaasedelen 8123af561a speed up composer text highlighting 2017-06-12 16:33:28 -07:00
gaasedelen e46ae32131 prototyping asynchronous metadata collection 2017-04-16 21:33:49 -07:00
gaasedelen eb6c5d0709 additional cleanup / renaming 2017-04-12 23:22:33 -07:00
gaasedelen caac8cc352 a few more coverage vs hitmap code changes 2017-04-10 20:55:00 -07:00
gaasedelen 65a138ac31 starting seperation of coverage & hitmaps 2017-04-08 20:32:30 -07:00
gaasedelen 5e5db9d1e5 fixes combobox not eliding on IDA 6.95/Qt5 2017-04-07 18:07:58 -07:00
gaasedelen 952f063d58 added drcov v6.1.1 - v7.0.0-RC1 support, fixes #3 2017-04-07 17:50:25 -07:00
gaasedelen a6d7dd9726 Merge branch 'develop' 2017-04-01 14:42:41 -07:00
gaasedelen 6ca1f014c7 updated README.md 2017-04-01 14:29:33 -07:00
gaasedelen c961fc98ac version bump to v0.3.0 2017-04-01 14:26:20 -07:00
gaasedelen 5ac5471a09 Merge branch 'fbl_composing' into develop 2017-04-01 14:25:03 -07:00
gaasedelen e4b4acee7a bugfixes for coverage combobox selections 2017-04-01 13:49:02 -07:00
gaasedelen f239825490 bugfix for hotshell inaccuracies 2017-04-01 13:23:04 -07:00
gaasedelen 895a9a8422 6.95 compat fixes 2017-03-30 18:24:25 -07:00
gaasedelen 4d9bb9a5b9 cleanup to close out composing 2017-03-30 18:06:33 -07:00
gaasedelen c70a2c3452 delete coverage from combobox 2017-03-28 22:41:27 -07:00
gaasedelen c3b89e303b bug fixes 2017-03-26 09:42:54 -07:00
gaasedelen 04727f83fa implemented saving of new compositions 2017-03-26 08:22:27 -07:00
gaasedelen b4a8030ec9 IDA 6.95 fixes, theme awareness improvements 2017-03-25 20:46:56 -07:00
gaasedelen 3e71885478 added syntax highlighting to composition shell 2017-03-25 19:56:13 -07:00
gaasedelen 6b5a47a784 added coverage hints to the composer 2017-03-25 07:57:56 -07:00
gaasedelen fa68e123f8 updates to table, shell, now using monospace font 2017-03-19 14:44:07 -07:00
gaasedelen e2d90cf0fe coverage aliasing, more details in combobox 2017-03-19 11:59:31 -07:00
gaasedelen cef3b55f08 additional cleanup and refactor of shell stuff 2017-03-18 12:46:04 -07:00
gaasedelen 9f557e8d91 cleanup of director coverage access and shell 2017-03-18 11:16:53 -07:00
gaasedelen 2cb8ae836a prototype composer shell implementation 2017-03-17 22:07:01 -07:00
gaasedelen 12476fc55c prototyped composition grammar parser 2017-03-17 21:12:46 -07:00
gaasedelen a1b63234b0 Merge branch 'fbl_delta' into develop 2017-03-11 13:32:23 -08:00
gaasedelen e92790895e unpaint only the difference between coverage sets 2017-03-11 12:46:10 -08:00
gaasedelen 78a8c69010 added logical operations to coverage sets 2017-03-11 12:09:42 -08:00
gaasedelen bec0a9a225 migrate coverage tracking to hit based dictionary 2017-03-11 10:09:22 -08:00
gaasedelen 90a8b23637 elaborate on _weak_self 2017-03-11 08:44:43 -08:00
gaasedelen 1e7950aa11 reference metadata internally, was getting too cumbersome 2017-03-11 08:36:26 -08:00
gaasedelen 6bbaa4e37a enlighten coverage to metadata delta based updates 2017-03-10 22:46:47 -08:00
gaasedelen e0c0c4ff0c created MetadataDelta instead 2017-03-10 20:24:10 -08:00
gaasedelen e0a8e94d0a compute the delta between two database metadata objects 2017-03-10 19:35:52 -08:00
gaasedelen 9b14d86d26 make module name check case insensitive 2017-03-07 23:06:50 -08:00
gaasedelen a222a48278 Merge branch 'develop' 2017-03-07 22:13:31 -08:00
gaasedelen 152756d299 Merge branch 'fbl_multifile' into develop 2017-03-05 18:10:34 -08:00
gaasedelen 3e252d828f version bump to v0.2.0 2017-03-05 18:08:54 -08:00
gaasedelen 87885516b4 additional cleanup, unpainting, good enough 2017-03-04 19:48:11 -08:00
gaasedelen bf68655ed5 profiled & cleaned up metadata collection 2017-03-04 14:59:39 -08:00
gaasedelen 040377496b refactored code into multiple files, some cleanup 2017-03-04 13:32:26 -08:00
gaasedelen dc421a4255 prototype multifile support 2017-03-04 11:24:27 -08:00
gaasedelen dea4d8bf51 *way faster* drcov log parsing 2017-03-03 21:08:01 -08:00
gaasedelen 61881650c2 faster drcov log parsing using ctype structures 2017-03-03 21:02:41 -08:00
115 changed files with 33439 additions and 2337 deletions
+1 -1
@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2017 Markus Gaasedelen
Copyright (c) 2017-2024 Markus Gaasedelen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
+167 -39
@@ -1,84 +1,212 @@
# Lighthouse - Code Coverage Explorer for IDA Pro
![Lighthouse Plugin](screenshots/painting.png)
# Lighthouse - A Coverage Explorer for Reverse Engineers
<p align="center">
<img alt="Lighthouse Plugin" src="screenshots/overview.gif"/>
</p>
## Overview
Lighthouse is a Code Coverage Plugin for [IDA Pro](https://www.hex-rays.com/products/ida/). The plugin leverages IDA as a platform to map, explore, and visualize externally collected code coverage data when symbols or source may not be available for a given binary.
Lighthouse is a powerful code coverage explorer for [IDA Pro](https://www.hex-rays.com/products/ida/) and [Binary Ninja](https://binary.ninja/), providing software researchers with uniquely interactive controls to study execution maps for native applications without requiring symbols or source.
There are no bells or whistles. This plugin is labeled only as a prototype and code example for the community.
This project placed 2nd in IDA's [2017 Plug-In Contest](https://hex-rays.com/contests_details/contest2017/) and was later [nominated](https://pwnies.com/lighthouse/) in the 2021 Pwnie Awards for its contributions to the security research industry.
Special thanks to [@0vercl0k](https://twitter.com/0vercl0k) for the inspiration.
## Releases
* v0.9 -- Python 3 support, custom coverage formats, coverage cross-refs, theming subsystem, much more.
* v0.8 -- Binary Ninja support, HTML coverage reports, consistent styling, many tweaks, bugfixes.
* v0.7 -- Frida, C++ demangling, context menu, function prefixing, tweaks, bugfixes.
* v0.6 -- Intel pintool, cyclomatic complexity, batch load, bugfixes.
* v0.5 -- Search, IDA 7 support, many improvements, stability.
* v0.4 -- Most compute is now asynchronous, bugfixes.
* v0.3 -- Coverage composition, interactive composing shell.
* v0.2 -- Multifile support, performance improvements, bugfixes.
* v0.1 -- Initial release
## Installation
# Installation
Install Lighthouse into the IDA plugins folder.
Lighthouse is a cross-platform (Windows, macOS, Linux) Python 2/3 plugin. It takes zero third party dependencies, making the code both portable and easy to install.
- Copy the contents of the `plugin` folder to the IDA plugins folder
- On Windows, the folder is at `C:\Program Files (x86)\IDA 6.8\plugins`
- On MacOS, the folder is at `/Applications/IDA\ Pro\ 6.8/idaq.app/Contents/MacOS/plugins`
- On Linux, the folder may be at `/opt/IDA/plugins/`
Use the instructions below for your respective disassembler.
The plugin has only been tested on IDA Pro 6.8, 6.95 for Windows.
## IDA Installation
## Usage
1. From IDA's Python console, run the following command to find its plugin directory:
- `import idaapi, os; print(os.path.join(idaapi.get_user_idadir(), "plugins"))`
2. Copy the contents of this repository's `/plugins/` folder to the listed directory.
3. Restart your disassembler.
Lighthouse loads automatically when an IDB is opened, installing the following menu entries into the IDA interface:
## Binary Ninja Installation
```
- File --> Load file --> Code Coverage File(s)...
- View --> Open subviews --> Coverage Overview
```
Lighthouse can be installed through the plugin manager on Binary Ninja, supporting v3.5 and newer.
These are the entry points for a user to load and view coverage data.
1. Open Binary Ninja's plugin manager by navigating the following submenus:
- `Edit` -> `Preferences` -> `Manage Plugins`
2. Search for Lighthouse in the plugin manager, and click the `Enable` button in the bottom right.
3. Restart your disassembler.
## Coverage Overview
# Usage
The Coverage Overview is a dockable widget that provides a function level view of the active coverage data for the database.
Once properly installed, there will be a few new menu entries available in the disassembler. These are the entry points for a user to load coverage data and start using Lighthouse.
![Lighthouse Coverage Overview](screenshots/overview.png)
<p align="center">
<img alt="Lighthouse Menu Entries" src="screenshots/open.gif"/>
</p>
This table can be sorted by column, and entries can be double clicked to jump to their corresponding disassembly.
Lighthouse is able to load a few different 'flavors' of coverage data. To generate coverage data that can be loaded into Lighthouse, please look at the [README](https://github.com/gaasedelen/lighthouse/tree/master/coverage) in the coverage directory of this repository.
## Coverage Painting
Lighthouse 'paints' the active coverage data across the three major IDA views as applicable. Specifically, the Disassembly, Graph, and Pseudocode views.
While Lighthouse is in use, it will 'paint' the active coverage data across all of the code viewers available in the disassembler. Specifically, this will apply to your linear disassembly, graph, and decompiler windows.
![Lighthouse Coverage Painting](screenshots/painting.png)
<p align="center">
<img alt="Lighthouse Coverage Painting" src="screenshots/painting.png"/>
</p>
## Collecting Coverage
In Binary Ninja, only the linear disassembly, graph, and IL views are supported. Support for painting decompiler output in Binary Ninja will be added to Lighthouse in the *near future* as the feature stabilizes.
At this time, Lighthouse only consumes binary coverage data as produced by DynamoRIO's [drcov](http://dynamorio.org/docs/page_drcov.html) code coverage module.
# Coverage Overview
Collecting blackbox coverage data with `drcov` is relatively straightforward. The following example demonstrates how coverage was produced for the `boombox.exe` testcase provided in this repository.
The Coverage Overview is a dockable widget that will open up once coverage has been loaded into Lighthouse.
<p align="center">
<img alt="Lighthouse Coverage Overview" src="screenshots/overview.png"/>
</p>
This interactive widget provides a function level view of the loaded coverage data. It also houses a number of tools to manage loaded data and drive more advanced forms of coverage analysis.
## Context Menu
Right clicking the table in the Coverage Overview will produce a context menu with a few basic amenities to extract information from the table, or manipulate the database as part of your reverse engineering process.
<p align="center">
<img alt="Lighthouse Context Menu" src="screenshots/context_menu.gif"/>
</p>
If there are any other actions that you think might be useful to add to this context menu, please file an issue and they will be considered for a future release of Lighthouse.
## Coverage ComboBox
Loaded coverage and user constructed compositions can be selected or deleted through the coverage combobox.
<p align="center">
<img alt="Lighthouse Coverage ComboBox" src="screenshots/combobox.gif"/>
</p>
## HTML Coverage Report
Lighthouse can generate rudimentary HTML coverage reports. A sample report can be seen [here](https://rawgit.com/gaasedelen/lighthouse/master/testcase/report.html).
<p align="center">
<img alt="Lighthouse HTML Report" src="screenshots/html_report.gif"/>
</p>
# Coverage Shell
At the bottom of the coverage overview window is the coverage shell. This shell can be used to perform logic-based operations that combine or manipulate the loaded coverage sets.
<p align="center">
<img alt="Lighthouse Coverage Composition" src="screenshots/shell.gif"/>
</p>
This feature is extremely useful in exploring the relationships of program execution across multiple runs. In other words, the shell can be used to 'diff' execution between coverage sets and extract a deeper meaning that is otherwise obscured within the noise of their individual parts.
## Composition Syntax
Coverage composition, or _Composing_ as demonstrated above, is achieved through a simple expression grammar and 'shorthand' coverage symbols (A to Z) on the composing shell.
### Grammar Tokens
* Logical Operators: `|, &, ^, -`
* Coverage Symbol: `A, B, C, ..., Z, *`
* Parenthesis: `(...)`
### Example Compositions
1. Executed code that is *shared* between coverage `A` and coverage `B`:
```
A & B
```
To produce such a log with DynamoRIO:
```
..\DynamoRIO-Windows-7.0.0-RC1\bin64\drrun.exe -t drcov -- boombox.exe
```
This command will produce a `.log` file consisting of the coverage data upon termination of the target application.
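Lighthouse parses these drcov logs itself; purely for illustration, here is a minimal sketch of pulling the basic-block table out of a version-2 drcov log in Python. The record layout assumed here (u32 start offset, u16 size, u16 module id) is the commonly documented drcov v2 binary layout; the function and type names are this example's own, not Lighthouse's API:

```python
import struct
from collections import namedtuple

# One executed basic block: offset into its module, size in bytes, module id.
BasicBlock = namedtuple("BasicBlock", "start size mod_id")

def parse_drcov_bb_table(data: bytes):
    """Extract the basic-block table from a drcov (v2) log.

    The log begins with a text header; the line 'BB Table: N bbs'
    is followed by N packed 8-byte records:
        uint32 start offset, uint16 size, uint16 module id
    """
    marker = b"BB Table: "
    idx = data.find(marker)
    if idx == -1:
        raise ValueError("no BB table found in log")
    eol = data.index(b"\n", idx)                       # end of the 'BB Table' line
    count = int(data[idx + len(marker):eol].split()[0])  # b"N bbs" -> N
    table = data[eol + 1:eol + 1 + count * 8]
    return [BasicBlock(*struct.unpack_from("<IHH", table, i * 8))
            for i in range(count)]
```

A real loader would also parse the module table above the BB table to map each `mod_id` back to the binary it belongs to.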
2. Executed code that is *unique* only to coverage `A`:
```
A - B
```
## Other Coverage Sources
3. Executed code that is *unique* to `A` or `B`, but not `C`:
```
(A | B) - C
```
[drcov](http://dynamorio.org/docs/page_drcov.html) was selected as the initial coverage data source due to its availability, adoption, and support for multiple platforms (Win/Mac/Linux) and architectures (x86/AMD64/ARM).
Expressions can be of arbitrary length or complexity, but evaluation of the composition may occur right to left, so parentheses are suggested for potentially ambiguous expressions.
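Because each shorthand symbol is just a set of executed blocks, the grammar's operators behave like ordinary set algebra. A minimal sketch (not Lighthouse's implementation) using hypothetical Python sets of basic-block addresses:

```python
# Hypothetical coverage sets: addresses of executed basic blocks per run.
run_a = {0x1000, 0x1010, 0x1030}
run_b = {0x1000, 0x1020, 0x1030}
run_c = {0x1030}

shared = run_a & run_b                   # AND: executed in both A and B
unique_a = run_a - run_b                 # MINUS: executed only by A
divergent = run_a ^ run_b                # XOR: executed by exactly one of A, B
either_not_c = (run_a | run_b) - run_c   # (A | B) - C from example 3 above
```

Here `shared` is `{0x1000, 0x1030}` and `either_not_c` is `{0x1000, 0x1010, 0x1020}` — the same results the shell's `A & B` and `(A | B) - C` compositions would express.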
Intel's [PIN](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool), for example, does not come with a default code coverage pintool. Most users appear to implement their own solution, and there is no clear format for Lighthouse to standardize on. In the future, Lighthouse may ship with its own pintool.
## Hot Shell
While Lighthouse is considered a prototype, internally it is largely agnostic of its data source. Future work will allow one to drop a loader into the `parsers` folder without any need for code changes to Lighthouse. Right now, this is not the case.
Additionally, there is a 'Hot Shell' mode that asynchronously evaluates and caches user compositions in real-time.
## Future Work
<p align="center">
<img alt="Lighthouse Hot Shell" src="screenshots/hot_shell.gif"/>
</p>
The hot shell serves as a natural gateway into the unguided exploration of composed relationships.
## Search
Using the shell, you can search and filter the functions listed in the coverage table by prefixing their query with `/`.
<p align="center">
<img alt="Lighthouse Search" src="screenshots/search.gif"/>
</p>
The head of the shell will show an updated coverage % computed only from the remaining functions. This is useful when analyzing coverage for specific function families.
## Jump
Entering an address or function name into the shell can be used to jump to corresponding function entries in the table.
<p align="center">
<img alt="Lighthouse Jump" src="screenshots/jump.gif"/>
</p>
# Coverage Cross-references (Xref)
While using Lighthouse, you can right click any basic block (or instruction) and use the 'Coverage Xref' action to see which coverage sets executed the selected block. Double clicking any of the listed entries will instantly switch to that coverage set.
<p align="center">
<img alt="Lighthouse Xref" src="screenshots/xref.gif"/>
</p>
This pairs well with the 'Coverage Batch' feature, which allows you to quickly load and aggregate thousands of coverage files into Lighthouse. Cross-referencing a block and selecting a 'set' will load the 'guilty' set from disk as a new coverage set for you to explore separate from the batch.
# Themes
Lighthouse ships with two default themes -- a 'light' theme, and a 'dark' one. Depending on the colors currently used by your disassembler, Lighthouse will attempt to select the theme that seems most appropriate.
<p align="center">
<img alt="Lighthouse Themes" src="screenshots/themes.png"/>
</p>
The theme files are stored as simple JSON on disk and are highly configurable. If you are not happy with the default themes or colors, you can create your own themes and simply drop them in the user theme directory.
Lighthouse will remember your theme preference for future loads and uses.
# Future Work
Time and motivation permitting, future work may include:
* ~~Asynchronous composition, painting, metadata collection~~
* ~~Multi file/coverage support~~
* Nag Vector35 to fix HLIL highlighting ([bug](https://github.com/Vector35/binaryninja-api/issues/2584)) in Binary Ninja
* Profiling based heatmaps/painting
* Automatic parser pickup
* Parsers for additional coverage sources, e.g. PIN
* ~~Additional coverage sources, trace formats, etc~~
* Improved pseudocode painting
* Coverage & profiling treemaps
* ~~Lighthouse console access~~, headless usage
* ~~Custom themes~~
* ~~Python 3 support~~
# Contributing
I welcome external contributions, issues, and feature requests. Please make any pull requests against the `develop` branch of this repository if you would like them to be considered for a future release.
# Authors
* Markus Gaasedelen ([@gaasedelen](https://twitter.com/gaasedelen))
# Lighthouse - A Coverage Explorer for Reverse Engineers
<p align="center">
<img alt="Lighthouse Plugin" src="https://raw.githubusercontent.com/gaasedelen/lighthouse/master/screenshots/overview.gif"/>
</p>
## Overview
Lighthouse is a powerful code coverage explorer for [IDA Pro](https://www.hex-rays.com/products/ida/) and [Binary Ninja](https://binary.ninja/), providing software researchers with uniquely interactive controls to study execution maps for native applications without requiring symbols or source.
For additional usage information, please check out the full [README](https://github.com/gaasedelen/lighthouse) on GitHub.
import os
import sys
#------------------------------------------------------------------------------
# Binary Ninja 'Plugin Manager' Stub
#------------------------------------------------------------------------------
#
# This file is an alternative loading stub created specifically to
# support the ability to 'easy' install Lighthouse into Binary Ninja
# via its 'Plugin Manager' functionality.
#
# Please disregard this code / subdirectory if performing **manual**
# installations of Lighthouse in IDA or Binary Ninja.
#
lh_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "plugins")
sys.path.append(lh_path)
from lighthouse.util.log import logging_started, start_logging
from lighthouse.util.disassembler import disassembler
if not logging_started():
logger = start_logging()
logger.info("Selecting Binary Ninja loader...")
from lighthouse.integration.binja_loader import *
{
"api": [
"python3"
],
"author": "Markus Gaasedelen",
"description": "A Coverage Explorer for Reverse Engineers",
"license": {
"name": "MIT",
"text": "Copyright (c) 2024 Markus Gaasedelen\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
},
"longdescription": "",
"minimumbinaryninjaversion": 4526,
"name": "Lighthouse",
"platforms": [
"Darwin",
"Linux",
"Windows"
],
"pluginmetadataversion": 2,
"type": [
"helper"
],
"version": "0.9.4"
}
# Collecting Coverage
Before using Lighthouse, one will need to collect code coverage data for their target binary / application.
The examples below demonstrate how one can use [DynamoRIO](http://www.dynamorio.org), [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) or [Frida](https://www.frida.re) to collect Lighthouse compatible coverage against a target. The `.log` files produced by these instrumentation tools can be loaded directly into Lighthouse.
## DynamoRIO
Code coverage data can be collected via DynamoRIO's [drcov](http://dynamorio.org/docs/page_drcov.html) code coverage module.
Example usage:
```
..\DynamoRIO-Windows-7.0.0-RC1\bin64\drrun.exe -t drcov -- boombox.exe
```
## Intel Pin
Using a [custom pintool](pin/README.md) contributed by [Agustin Gianni](https://twitter.com/agustingianni), the Intel Pin DBI can also be used to collect coverage data.
Example usage:
```
pin.exe -t CodeCoverage64.dll -- boombox.exe
```
For convenience, binaries for the Windows pintool can be found on the [releases](https://github.com/gaasedelen/lighthouse/releases) page. macOS and Linux users need to compile the pintool themselves following the [instructions](pin/README.md#compilation) included with the pintool for their respective platforms.
## Frida (Experimental)
Lighthouse offers limited support for Frida based code coverage via a custom [instrumentation script](frida/README.md) contributed by [yrp](https://twitter.com/yrp604).
Example usage:
```
sudo python frida-drcov.py bb-bench
```
# Other Coverage Formats
Lighthouse is flexible as to what kinds of coverage or 'trace' file formats it can load. Outlined below are the human-readable text formats that are arguably the easiest to output from a custom tracer.
## Module + Offset (modoff)
A 'Module+Offset' coverage file / trace is a highly recommended coverage format due to its simplicity and readability:
```
boombox+3a06
boombox+3a09
boombox+3a0f
boombox+3a15
...
```
Each line of the trace represents an executed instruction or basic block in the instrumented program. The line *must* name an executed module, e.g. `boombox.exe`, and give the offset of the executed address relative to the module's imagebase.
It is okay for hits from other modules (say, `kernel32.dll`) to exist in the trace. Lighthouse will not load coverage for them.
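A custom tracer can emit this format with nothing more than string formatting. A minimal sketch (the module name and offsets below are placeholder example data, not output from a real trace):

```python
# Sketch: write a Module+Offset (modoff) trace that Lighthouse can load.
# One 'module+hexoffset' line is emitted per executed address.

def write_modoff_trace(path, module_name, offsets):
    """Write one 'module+offset' line per executed address."""
    with open(path, "w") as f:
        for offset in offsets:
            f.write("%s+%x\n" % (module_name, offset))

# Example data only:
write_modoff_trace("boombox.cov", "boombox", [0x3a06, 0x3a09, 0x3a0f])
```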
## Address Trace (Instruction, or Basic Block)
Perhaps the most primitive coverage format, Lighthouse can also consume an 'absolute address' style trace:
```
0x14000419c
0x1400041a0
0x1400045dc
0x1400045e1
0x1400045e2
...
```
Note that these address traces can be either instruction addresses, or basic block addresses -- it does not matter. The main caveat is that addresses in the trace *must* match the address space within the disassembler database.
If an address cannot be mapped into a function in the disassembler database, Lighthouse will simply discard it.
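If your tracer only records absolute addresses, converting them to the modoff format is a simple subtraction against the module's load address. A sketch, assuming a known image base (`0x140000000` here is example data):

```python
# Sketch: convert an absolute-address trace into modoff form, given the
# base address the module was loaded at. Addresses below the base are
# skipped, mirroring how Lighthouse discards unmappable addresses.

def rebase(module_name, addresses, base=0x140000000):
    """Map absolute addresses to 'module+offset' lines."""
    return ["%s+%x" % (module_name, a - base) for a in addresses if a >= base]

print(rebase("boombox", [0x14000419c, 0x1400041a0]))
```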
## Custom Trace Formats
If you are adamant about using a completely custom coverage format, you can subclass Lighthouse's `CoverageFile` parser interface. Once complete, simply drop your parser into the `parsers` folder.
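The real `CoverageFile` interface lives in the Lighthouse source (see the `parsers` folder); as a rough, standalone illustration of the parsing work such a parser does — reducing a trace to per-module offsets — here is a sketch against a made-up `module:hexoffset` format (not Lighthouse's actual API):

```python
# Illustrative only: parse a hypothetical 'module:hexoffset' text format
# into a module -> sorted-offsets mapping, the kind of data a custom
# CoverageFile subclass would ultimately hand to Lighthouse.
from collections import defaultdict

def parse_custom_trace(lines):
    """Parse 'module:hexoffset' lines, ignoring blanks and comments."""
    coverage = defaultdict(set)  # set: dedupes repeated hits for free
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        module, _, offset = line.partition(":")
        coverage[module].add(int(offset, 16))
    return {mod: sorted(offs) for mod, offs in coverage.items()}

print(parse_custom_trace(["boombox:3a06", "boombox:3a09", "# a comment"]))
```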
# frida-drcov.py
In this folder you will find the code coverage collection script `frida-drcov.py`, which runs on top of the [Frida](https://www.frida.re/) DBI toolkit. This script will produce code coverage (using Frida) in a log format compatible with [Lighthouse](https://github.com/gaasedelen/lighthouse).
Frida is best supported on mobile platforms such as iOS and Android, with some claimed support for Windows, macOS, Linux, and QNX. Practically speaking, `frida-drcov.py` should only be used for collecting coverage data on mobile applications.
This script is labeled only as a prototype.
## Install
To use `frida-drcov.py`, you must have [Frida](https://www.frida.re/) installed. This can be done via python's `pip`:
```
sudo pip install frida
```
## Usage
Once Frida is installed, the `frida-drcov.py` script in this repo can be used to collect coverage against a running process as demonstrated below. By default, the code coverage data will be written to the file `frida-cov.log` at the end of execution.
```
python frida-drcov.py <process name | pid>
```
Here is an example of us instrumenting the running process `bb-bench`.
```
$ sudo python frida-drcov.py bb-bench
[+] Got module info
Starting to stalk threads...
Stalking thread 775
Done stalking threads.
[*] Now collecting info, control-D to terminate....
[*] Detaching, this might take a second... # ^d
[+] Detached. Got 320 basic blocks.
[*] Formatting coverage and saving...
[!] Done
$ ls -lh frida-cov.log # this is the file you will load into lighthouse
-rw-r--r-- 1 root staff 7.2K 21 Oct 11:58 frida-cov.log
```
Using the `-o` flag, one can specify a custom name/location for the coverage log file:
```
python frida-drcov.py -o more-coverage.log foo
```
## Module Whitelisting
One can whitelist specific modules inside the target process. Say you have binary `foo` which imports the libraries `libfoo`, `libbar`, and `libbaz`. Using the `-w` flag (whitelist) on the command line, we can explicitly target modules of interest:
```
$ python frida-drcov.py -w libfoo -w libbaz foo
```
This will reduce the amount of information collected and improve performance. If no `-w` arguments are supplied, `frida-drcov.py` will trace all loaded images.
## Thread Targeting
On multi-threaded applications, tracing all threads can impose significant overhead. For these cases you can filter coverage collection based on thread id if you only care about specific threads.
In the following example, we target thread ids `543` and `678` running in the process named `foo`.
```
python frida-drcov.py -t 543 -t 678 foo
```
Without the `-t` flag, all threads that exist in the process at the time of attach will be traced.
# Authors
* yrp ([@yrp604](https://twitter.com/yrp604))
#!/usr/bin/env python
from __future__ import print_function
import argparse
import json
import os
import signal
import sys
import frida
"""
Frida BB tracer that outputs in DRcov format.
Frida script is responsible for:
- Getting and sending the process module map initially
- Getting the code execution events
- Parsing the raw event into a GumCompileEvent
- Converting from GumCompileEvent to DRcov block
- Sending a list of DRcov blocks to python
Python side is responsible for:
- Attaching and detaching from the target process
- Removing duplicate DRcov blocks
- Formatting module map and blocks
- Writing the output file
"""
# Our frida script, takes two string arguments to embed
# 1. whitelist of modules, in the form "['module_a', 'module_b']" or "['all']"
# 2. threads to trace, in the form "[345, 765]" or "['all']"
js = """
"use strict";
var whitelist = %s;
var threadlist = %s;
// Get the module map
function make_maps() {
var maps = Process.enumerateModulesSync();
var i = 0;
// We need to add the module id
maps.map(function(o) { o.id = i++; });
// .. and the module end point
maps.map(function(o) { o.end = o.base.add(o.size); });
return maps;
}
var maps = make_maps()
send({'map': maps});
// We want to use frida's ModuleMap to create DRcov events, however frida's
// Module object doesn't have the 'id' we added above. To get around this,
// we'll create a mapping from path -> id, and have the ModuleMap look up the
// path. While the ModuleMap does contain the base address, if we cache it
// here, we can simply look up the path rather than the entire Module object.
var module_ids = {};
maps.map(function (e) {
module_ids[e.path] = {id: e.id, start: e.base};
});
var filtered_maps = new ModuleMap(function (m) {
if (whitelist.indexOf('all') >= 0) { return true; }
return whitelist.some(item => m.name.toLowerCase().includes(item.toLowerCase()));
});
// This function takes a list of GumCompileEvents and converts it into a DRcov
// entry. Note that we'll get duplicated events when two traced threads
// execute the same code, but this will be handled by the python side.
function drcov_bbs(bbs, fmaps, path_ids) {
// We're going to use send(..., data) so we need an array buffer to send
// our results back with. Let's go ahead and alloc the max possible
// reply size
/*
// Data structure for the coverage info itself
typedef struct _bb_entry_t {
uint start; // offset of bb start from the image base
ushort size;
ushort mod_id;
} bb_entry_t;
*/
var entry_sz = 8;
var bb = new ArrayBuffer(entry_sz * bbs.length);
var num_entries = 0;
for (var i = 0; i < bbs.length; ++i) {
var e = bbs[i];
var start = e[0];
var end = e[1];
var path = fmaps.findPath(start);
if (path == null) { continue; }
var mod_info = path_ids[path];
var offset = start.sub(mod_info.start).toInt32();
var size = end.sub(start).toInt32();
var mod_id = mod_info.id;
// We're going to create two memory views into the array we alloc'd at
// the start.
// we want one u32 after all the other entries we've created
var x = new Uint32Array(bb, num_entries * entry_sz, 1);
x[0] = offset;
// we want two u16's offset after the 4 byte u32 above
var y = new Uint16Array(bb, num_entries * entry_sz + 4, 2);
y[0] = size;
y[1] = mod_id;
++num_entries;
}
// We can save some space here, rather than sending the entire array back,
// we can create a new view into the already allocated memory, and just
// send back that linear chunk.
return new Uint8Array(bb, 0, num_entries * entry_sz);
}
// Punt on self modifying code -- should improve speed and lighthouse will
// barf on it anyways
Stalker.trustThreshold = 0;
console.log('Starting to stalk threads...');
// Note, we will miss any bbs hit by threads that are created after we've
// attached
Process.enumerateThreads({
onMatch: function (thread) {
if (threadlist.indexOf(thread.id) < 0 &&
threadlist.indexOf('all') < 0) {
// This is not the thread you're looking for
return;
}
console.log('Stalking thread ' + thread.id + '.');
Stalker.follow(thread.id, {
events: {
compile: true
},
onReceive: function (event) {
var bb_events = Stalker.parse(event,
{stringify: false, annotate: false});
var bbs = drcov_bbs(bb_events, filtered_maps, module_ids);
// We're going to send a dummy message, the actual bb is in the
// data field. We're sending a dict to keep it consistent with
// the map. We're also creating the drcov event in javascript,
// so on the py recv side we can just blindly add it to a set.
send({bbs: 1}, bbs);
}
});
},
onComplete: function () { console.log('Done stalking threads.'); }
});
"""
# These are global so we can easily access them from the frida callbacks or
# signal handlers. It's important that bbs is a set, as we're going to depend
# on its uniquing behavior for deduplication
modules = []
bbs = set([])
outfile = 'frida-cov.log'
# This converts the object frida sends which has string addresses into
# a python dict
def populate_modules(image_list):
global modules
for image in image_list:
idx = image['id']
path = image['path']
base = int(image['base'], 0)
end = int(image['end'], 0)
size = image['size']
m = {
'id': idx,
'path': path,
'base': base,
'end': end,
'size': size}
modules.append(m)
print('[+] Got module info.')
# called when we get coverage data from frida
def populate_bbs(data):
global bbs
# we know every drcov block is 8 bytes, so let's just blindly slice and
# insert. This will dedup for us.
block_sz = 8
for i in range(0, len(data), block_sz):
bbs.add(data[i:i+block_sz])
# take the module dict and format it as a drcov logfile header
def create_header(mods):
header = ''
header += 'DRCOV VERSION: 2\n'
header += 'DRCOV FLAVOR: frida\n'
header += 'Module Table: version 2, count %d\n' % len(mods)
header += 'Columns: id, base, end, entry, checksum, timestamp, path\n'
entries = []
for m in mods:
# drcov: id, base, end, entry, checksum, timestamp, path
# frida doesn't give us entry, checksum, or timestamp
# luckily, I don't think we need them.
entry = '%3d, %#016x, %#016x, %#016x, %#08x, %#08x, %s' % (
m['id'], m['base'], m['end'], 0, 0, 0, m['path'])
entries.append(entry)
header_modules = '\n'.join(entries)
return ("%s%s\n" % (header, header_modules)).encode("utf-8")
# take the recv'd basic blocks, finish the header, and append the coverage
def create_coverage(data):
bb_header = b'BB Table: %d bbs\n' % len(data)
return bb_header + b''.join(data)
def on_message(msg, data):
#print(msg)
pay = msg['payload']
if 'map' in pay:
maps = pay['map']
populate_modules(maps)
else:
populate_bbs(data)
def sigint(signo, frame):
print('[!] SIGINT, saving %d blocks to \'%s\'' % (len(bbs), outfile))
save_coverage()
print('[!] Done')
os._exit(1)
def save_coverage():
header = create_header(modules)
body = create_coverage(bbs)
with open(outfile, 'wb') as h:
h.write(header)
h.write(body)
def main():
global outfile
parser = argparse.ArgumentParser()
parser.add_argument('target',
help='target process name or pid',
default='-1')
parser.add_argument('-o', '--outfile',
help='coverage file',
default='frida-cov.log')
parser.add_argument('-w', '--whitelist-modules',
help='module to trace, may be specified multiple times [all]',
action='append', default=[])
parser.add_argument('-t', '--thread-id',
help='threads to trace, may be specified multiple times [all]',
action='append', type=int, default=[])
parser.add_argument('-D', '--device',
help='select a device by id [local]',
default='local')
args = parser.parse_args()
outfile = args.outfile
device = frida.get_device(args.device)
target = -1
for p in device.enumerate_processes():
if args.target in [str(p.pid), p.name]:
if target == -1:
target = p.pid
else:
print('[-] Warning: multiple processes on device match '
'\'%s\', using pid: %d' % (args.target, target))
if target == -1:
print('[-] Error: could not find process matching '
'\'%s\' on device \'%s\'' % (args.target, device.id))
sys.exit(1)
signal.signal(signal.SIGINT, sigint)
whitelist_modules = ['all']
if len(args.whitelist_modules):
whitelist_modules = args.whitelist_modules
threadlist = ['all']
if len(args.thread_id):
threadlist = args.thread_id
json_whitelist_modules = json.dumps(whitelist_modules)
json_threadlist = json.dumps(threadlist)
print('[*] Attaching to pid \'%d\' on device \'%s\'...' %
(target, device.id))
session = device.attach(target)
print('[+] Attached. Loading script...')
script = session.create_script(js % (json_whitelist_modules, json_threadlist))
script.on('message', on_message)
script.load()
print('[*] Now collecting info, control-C or control-D to terminate....')
sys.stdin.read()
print('[*] Detaching, this might take a second...')
session.detach()
print('[+] Detached. Got %d basic blocks.' % len(bbs))
print('[*] Formatting coverage and saving...')
save_coverage()
print('[!] Done')
sys.exit(0)
if __name__ == '__main__':
main()
#include <iostream>
#include <set>
#include <string>
#include <vector>
#include <utility>
#include <iterator>
#include <algorithm>
#include <cstdio>
#include <cstdarg>
#include <cstdlib>
#include <unordered_set>
#include <unordered_map>
#include "pin.H"
#include "TraceFile.h"
#include "ImageManager.h"
// Pin comes with some old standard libraries.
namespace pintool {
template <typename V>
using unordered_set = std::tr1::unordered_set<V>;
template <typename K, typename V>
using unordered_map = std::tr1::unordered_map<K, V>;
}
// Tool's arguments.
static KNOB<std::string> KnobModuleWhitelist(KNOB_MODE_APPEND, "pintool", "w", "",
"Add a module to the white list. If none is specified, every module is white-listed. Example: libTIFF.dylib");
static KNOB<std::string> KnobLogFile(KNOB_MODE_WRITEONCE, "pintool", "l", "trace.log",
"Name of the output file. If none is specified, trace.log is used.");
// Return the file/directory name of a path.
static std::string base_name(const std::string& path)
{
#if defined(TARGET_WINDOWS)
#define PATH_SEPARATOR "\\"
#else
#define PATH_SEPARATOR "/"
#endif
std::string::size_type idx = path.rfind(PATH_SEPARATOR);
std::string name = (idx == std::string::npos) ? path : path.substr(idx + 1);
return name;
}
// Per thread data structure. This is mainly done to avoid locking.
// - Per-thread map of executed basic blocks, and their size.
struct ThreadData {
pintool::unordered_map<ADDRINT, uint16_t> m_blocks;
};
class ToolContext {
public:
ToolContext()
{
PIN_InitLock(&m_loaded_images_lock);
PIN_InitLock(&m_thread_lock);
m_tls_key = PIN_CreateThreadDataKey(nullptr);
}
ThreadData* GetThreadLocalData(THREADID tid)
{
return static_cast<ThreadData*>(PIN_GetThreadData(m_tls_key, tid));
}
void setThreadLocalData(THREADID tid, ThreadData* data)
{
PIN_SetThreadData(m_tls_key, data, tid);
}
// The image manager allows us to keep track of loaded images.
ImageManager* m_images;
// Trace file used to log execution traces.
TraceFile* m_trace;
// Keep track of _all_ the loaded images.
std::vector<LoadedImage> m_loaded_images;
PIN_LOCK m_loaded_images_lock;
// Thread tracking utilities.
std::set<THREADID> m_seen_threads;
std::vector<ThreadData*> m_terminated_threads;
PIN_LOCK m_thread_lock;
// Flag that indicates that tracing is enabled. Always true if there are no whitelisted images.
bool m_tracing_enabled = true;
// TLS key used to store per-thread data.
TLS_KEY m_tls_key;
};
// Thread creation event handler.
static VOID OnThreadStart(THREADID tid, CONTEXT* ctxt, INT32 flags, VOID* v)
{
// Create a new `ThreadData` object and set it on the TLS.
auto& context = *reinterpret_cast<ToolContext*>(v);
context.setThreadLocalData(tid, new ThreadData);
// Save the recently created thread.
PIN_GetLock(&context.m_thread_lock, 1);
{
context.m_seen_threads.insert(tid);
}
PIN_ReleaseLock(&context.m_thread_lock);
}
// Thread destruction event handler.
static VOID OnThreadFini(THREADID tid, const CONTEXT* ctxt, INT32 c, VOID* v)
{
// Get thread's `ThreadData` structure.
auto& context = *reinterpret_cast<ToolContext*>(v);
ThreadData* data = context.GetThreadLocalData(tid);
// Remove the thread from the seen threads set and add it to the terminated list.
PIN_GetLock(&context.m_thread_lock, 1);
{
context.m_seen_threads.erase(tid);
context.m_terminated_threads.push_back(data);
}
PIN_ReleaseLock(&context.m_thread_lock);
}
// Image load event handler.
static VOID OnImageLoad(IMG img, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
std::string img_name = base_name(IMG_Name(img));
ADDRINT low = IMG_LowAddress(img);
ADDRINT high = IMG_HighAddress(img);
printf("Loaded image: %p:%p -> %s\n", (void *)low, (void *)high, img_name.c_str());
// Save the loaded image with its original full name/path.
PIN_GetLock(&context.m_loaded_images_lock, 1);
{
context.m_loaded_images.push_back(LoadedImage(IMG_Name(img), low, high));
}
PIN_ReleaseLock(&context.m_loaded_images_lock);
// If the image is whitelisted save its information.
if (context.m_images->isWhiteListed(img_name)) {
context.m_images->addImage(img_name, low, high);
// Enable tracing if not already enabled.
if (!context.m_tracing_enabled)
context.m_tracing_enabled = true;
}
}
// Image unload event handler.
static VOID OnImageUnload(IMG img, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
context.m_images->removeImage(IMG_LowAddress(img));
}
// Basic block hit event handler.
static VOID PIN_FAST_ANALYSIS_CALL OnBasicBlockHit(THREADID tid, ADDRINT addr, UINT32 size, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
ThreadData* data = context.GetThreadLocalData(tid);
data->m_blocks[addr] = size;
PIN_RemoveInstrumentationInRange(addr, addr);
}
// Trace hit event handler.
static VOID OnTrace(TRACE trace, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
// Check if the address is inside a white-listed image.
if (!context.m_tracing_enabled || !context.m_images->isInterestingAddress(TRACE_Address(trace)))
return;
auto tid = PIN_ThreadId();
ThreadData* data = context.GetThreadLocalData(tid);
// This trace is getting JIT'd, which implies the head must get executed.
auto bbl = TRACE_BblHead(trace);
auto addr = BBL_Address(bbl);
data->m_blocks[addr] = (uint16_t)BBL_Size(bbl);
// For each basic block in the trace...
for (bbl = BBL_Next(bbl); BBL_Valid(bbl); bbl = BBL_Next(bbl))
{
// Ignore blocks that have already been marked as executed in the past...
ADDRINT addr = BBL_Address(bbl);
if (data->m_blocks.find(addr) != data->m_blocks.end())
continue;
// Instrument blocks that have not yet been executed (at least... by this thread).
BBL_InsertCall(bbl, IPOINT_ANYWHERE, (AFUNPTR)OnBasicBlockHit,
IARG_FAST_ANALYSIS_CALL,
IARG_THREAD_ID,
IARG_ADDRINT, addr,
IARG_UINT32, BBL_Size(bbl),
IARG_PTR, v,
IARG_END);
}
}
// Program finish event handler.
static VOID OnFini(INT32 code, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
context.m_trace->write_string("DRCOV VERSION: 2\n");
context.m_trace->write_string("DRCOV FLAVOR: drcov\n");
context.m_trace->write_string("Module Table: version 2, count %u\n", context.m_loaded_images.size());
context.m_trace->write_string("Columns: id, base, end, entry, checksum, timestamp, path\n");
// We don't supply entry, checksum, or timestamp.
for (unsigned i = 0; i < context.m_loaded_images.size(); i++) {
const auto& image = context.m_loaded_images[i];
context.m_trace->write_string("%2u, %p, %p, 0x0000000000000000, 0x00000000, 0x00000000, %s\n",
i, (void *)image.low_, (void *)image.high_, image.name_.c_str());
}
// Add non terminated threads to the list of terminated threads.
for (THREADID i : context.m_seen_threads) {
ThreadData* data = context.GetThreadLocalData(i);
context.m_terminated_threads.push_back(data);
}
// Count the global number of basic blocks.
size_t number_of_bbs = 0;
for (const auto& data : context.m_terminated_threads) {
number_of_bbs += data->m_blocks.size();
}
context.m_trace->write_string("BB Table: %u bbs\n", number_of_bbs);
struct __attribute__((packed)) drcov_bb {
uint32_t start;
uint16_t size;
uint16_t id;
};
drcov_bb tmp;
for (const auto& data : context.m_terminated_threads) {
for (const auto& block : data->m_blocks) {
auto address = block.first;
auto it = std::find_if(context.m_loaded_images.begin(), context.m_loaded_images.end(), [&address](const LoadedImage& image) {
return address >= image.low_ && address < image.high_;
});
if (it == context.m_loaded_images.end())
continue;
tmp.id = (uint16_t)std::distance(context.m_loaded_images.begin(), it);
tmp.start = (uint32_t)(address - it->low_);
tmp.size = data->m_blocks[address];
context.m_trace->write_binary(&tmp, sizeof(tmp));
}
}
}
int main(int argc, char* argv[])
{
std::cout << "CodeCoverage tool by Agustin Gianni (agustingianni@gmail.com)" << std::endl;
// Initialize symbol processing
PIN_InitSymbols();
// Initialize PIN.
if (PIN_Init(argc, argv)) {
std::cerr << "Error initializing PIN, PIN_Init failed!" << std::endl;
return -1;
}
// Initialize the tool context.
ToolContext *context = new ToolContext();
// Create an image manager that keeps track of the loaded/unloaded images.
context->m_images = new ImageManager();
for (unsigned i = 0; i < KnobModuleWhitelist.NumberOfValues(); ++i) {
std::cout << "White-listing image: " << KnobModuleWhitelist.Value(i) << std::endl;
context->m_images->addWhiteListedImage(KnobModuleWhitelist.Value(i));
// We will only enable tracing when any of the whitelisted images gets loaded.
context->m_tracing_enabled = false;
}
// Create a trace file.
std::cout << "Logging code coverage information to: " << KnobLogFile.ValueString() << std::endl;
context->m_trace = new TraceFile(KnobLogFile.ValueString());
// Handlers for thread creation and destruction.
PIN_AddThreadStartFunction(OnThreadStart, context);
PIN_AddThreadFiniFunction(OnThreadFini, context);
// Handlers for image loading and unloading.
IMG_AddInstrumentFunction(OnImageLoad, context);
IMG_AddUnloadFunction(OnImageUnload, context);
// Handlers for instrumentation events.
TRACE_AddInstrumentFunction(OnTrace, context);
// Handler for program exits.
PIN_AddFiniFunction(OnFini, context);
PIN_StartProgram();
return 0;
}
#include "ImageManager.h"
#include "pin.H"
ImageManager::ImageManager()
{
PIN_RWMutexInit(&images_lock);
}
ImageManager::~ImageManager()
{
PIN_RWMutexFini(&images_lock);
}
VOID ImageManager::addImage(std::string image_name, ADDRINT lo_addr,
ADDRINT hi_addr)
{
PIN_RWMutexWriteLock(&images_lock);
{
images.insert(LoadedImage(image_name, lo_addr, hi_addr));
}
PIN_RWMutexUnlock(&images_lock);
}
VOID ImageManager::removeImage(ADDRINT low)
{
PIN_RWMutexWriteLock(&images_lock);
{
std::set<LoadedImage>::iterator i = images.find(LoadedImage("", low));
if (i != images.end()) {
LoadedImage li = *i;
images.erase(i);
}
}
PIN_RWMutexUnlock(&images_lock);
}
VOID ImageManager::addWhiteListedImage(const std::string& image_name)
{
whitelist.insert(image_name);
}
BOOL ImageManager::isWhiteListed(const std::string& image_name)
{
return whitelist.find(image_name) != whitelist.end();
}
// Checks if the given address falls inside one of the white-listed images we are
// tracing.
BOOL ImageManager::isInterestingAddress(ADDRINT addr)
{
PIN_RWMutexReadLock(&images_lock);
{
// If there is no white-listed image, everything is white-listed.
if (images.empty() || (addr >= m_cached_low && addr < m_cached_high)) {
PIN_RWMutexUnlock(&images_lock);
return true;
}
auto i = images.upper_bound(LoadedImage("", addr));
// Guard against addresses below the first white-listed image, where
// decrementing begin() would be undefined behavior.
if (i == images.begin()) {
PIN_RWMutexUnlock(&images_lock);
return false;
}
--i;
// If the instruction address does not fall inside a valid white listed image, bail out.
if (!(i != images.end() && i->low_ <= addr && addr < i->high_)) {
PIN_RWMutexUnlock(&images_lock);
return false;
}
// Save the matched image.
m_cached_low = i->low_;
m_cached_high = i->high_;
}
PIN_RWMutexUnlock(&images_lock);
return true;
}
#ifndef IMAGEMANAGER_H_
#define IMAGEMANAGER_H_
#include <string>
#include <set>
#include "pin.H"
struct LoadedImage {
std::string name_;
ADDRINT low_;
ADDRINT high_;
LoadedImage(const std::string& n = "", ADDRINT low = 0, ADDRINT high = 0)
: name_(n)
, low_(low)
, high_(high)
{
}
// Overloaded method to implement searches over the loaded images list
// and also allow this class to be used on a set like STL container.
bool operator<(const LoadedImage& rhs) const
{
return low_ < rhs.low_;
}
};
class ImageManager {
private:
// Set of module names that are allowed to be traced.
std::set<LoadedImage> images;
PIN_RWMUTEX images_lock;
// Here we store the names of the images inside our white list.
std::set<std::string> whitelist;
// Store the last recently matched image so we can use it as a cache.
ADDRINT m_cached_low;
ADDRINT m_cached_high;
public:
ImageManager();
virtual ~ImageManager();
VOID addWhiteListedImage(const std::string& image_name);
BOOL isWhiteListed(const std::string& image_name);
BOOL isInterestingAddress(ADDRINT addr);
VOID addImage(std::string image_name, ADDRINT lo_add, ADDRINT hi_addr);
VOID removeImage(ADDRINT low);
};
#endif /* IMAGEMANAGER_H_ */
MIT License
Copyright (c) 2017 Agustin Gianni
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,10 @@
CONFIG_ROOT := $(PIN_ROOT)/source/tools/Config
include $(CONFIG_ROOT)/makefile.config
TOOL_CXXFLAGS += -std=c++11 -Wno-format
TOOL_ROOTS := CodeCoverage
$(OBJDIR)CodeCoverage$(PINTOOL_SUFFIX): $(OBJDIR)CodeCoverage$(OBJ_SUFFIX) $(OBJDIR)ImageManager$(OBJ_SUFFIX)
$(LINKER) $(TOOL_LDFLAGS) $(LINK_EXE)$@ $^ $(TOOL_LPATHS) $(TOOL_LIBS)
include $(TOOLS_ROOT)/Config/makefile.default.rules
@@ -0,0 +1,132 @@
# CodeCoverage Pintool
The `CodeCoverage` pintool runs on top of the [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) DBI framework and collects code coverage data in a log format compatible with [Lighthouse](https://github.com/gaasedelen/lighthouse). The log it produces emulates that of [drcov](http://dynamorio.org/docs/page_drcov.html), as shipped with [DynamoRIO](http://www.dynamorio.org).
This pintool should be considered a prototype.
# Compilation
To compile the pintool, you will first need to [download](https://software.intel.com/en-us/articles/pin-a-binary-instrumentation-tool-downloads) and extract Pin.
Follow the build instructions below for your respective platform.
## Building for MacOS or Linux
On MacOS or Linux, one can compile the pintool using the following commands.
```
# Location of this repo / pintool source
cd ~/lighthouse/coverage/pin
# Location where you extracted Pin
export PIN_ROOT=~/pin
export PATH=$PATH:$PIN_ROOT
make
make TARGET=ia32
```
The resulting binaries will be placed in a directory whose name depends on the architecture, platform, and build type:
* `obj-intel32/CodeCoverage.[so|dylib]`
* `obj-intel64/CodeCoverage.[so|dylib]`
## Building for Windows
To compile the Windows pintool, you must have at least Visual Studio 2015 installed.
Launch a command prompt and build the pintool with the following commands.
### 32-bit Pintool
```
REM If you are using VS 2022, run this line:
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x86
REM If you are using VS 2015, run this line instead:
REM "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86
REM Location of this repo / pintool source
cd C:\Users\user\lighthouse\coverage\pin
REM Location where you extracted Pin
set PIN_ROOT=C:\pin
set PATH=%PATH%;%PIN_ROOT%
build-x86.bat
```
### 64-bit Pintool
```
REM If you are using VS 2022, run this line:
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64
REM If you are using VS 2015, run this line instead:
REM "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86_amd64
REM Location of this repo / pintool source
cd C:\Users\user\lighthouse\coverage\pin
REM Location where you extracted Pin
set PIN_ROOT=C:\pin
set PATH=%PATH%;%PIN_ROOT%
build-x64.bat
```
The resulting binaries are named according to their architecture (e.g., `CodeCoverage64.dll` is the 64-bit pintool):
* `CodeCoverage.dll`
* `CodeCoverage64.dll`
Compiling a pintool on Windows can be more arduous. Because of this, we have provided compiled binaries for Windows on the [releases](https://github.com/gaasedelen/lighthouse/releases) page. Please be sure to use the pintool that matches your version of Pin.
# Usage
Once compiled, usage of the pintool is straightforward. Simply provide the compiled `CodeCoverage` pintool to `pin` via the `-t` argument. The resulting code coverage data will be written to the file `trace.log` at the end of execution.
Here is an example of instrumenting a 64-bit binary called `test` with the `CodeCoverage` pintool.
```
$ pin -t obj-intel64/CodeCoverage.dylib -- ./test
CodeCoverage tool by Agustin Gianni (agustingianni@gmail.com)
White-listed images not specified, instrumenting every module by default.
Logging code coverage information to: trace.log
Loaded image: 0x0000000101bf1000:0x0000000101bf1fff -> test
Loaded image: 0x00007fff6d167000:0x00007fff6d1dafff -> dyld
Loaded image: 0x00007fff94b07000:0x00007fff94b5afff -> libc++.1.dylib
Loaded image: 0x00007fff942fa000:0x00007fff942fbfff -> libSystem.B.dylib
Loaded image: 0x00007fff8bf30000:0x00007fff8bf59fff -> libc++abi.dylib
Loaded image: 0x00007fff875ac000:0x00007fff875b0fff -> libcache.dylib
$ ll trace.log
-rw------- 1 anon staff 113K Apr 28 00:57 trace.log
```
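The drcov-compatible `trace.log` produced above ends in a binary basic-block table. As a sketch (based on DynamoRIO's publicly documented drcov v2 format, not on this pintool's source, so treat the exact layout as an assumption), each record is an 8-byte packed struct:

```cpp
#include <cstdint>

// Assumed layout of one drcov v2 basic-block record, per DynamoRIO's public
// documentation; the pintool's actual output logic lives in CodeCoverage.cpp.
#pragma pack(push, 1)
struct BasicBlockRecord {
    uint32_t start;     // offset of the block from its module's base address
    uint16_t size;      // size of the block in bytes
    uint16_t module_id; // index into the module table at the top of the log
};
#pragma pack(pop)

static_assert(sizeof(BasicBlockRecord) == 8,
              "drcov v2 basic-block records are 8 bytes");
```

Lighthouse rebases each record by looking up `module_id` in the log's module table, which is why per-module base addresses are printed at load time.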
## Module Whitelisting
Using the `-w` command line flag, the pintool can be instructed to instrument only the modules you specify.
Here we run the pintool from the command line and specify that only the `test` image should be instrumented. This improves performance and drastically reduces the amount of data collected by ignoring the execution of shared libraries such as `libc`.
```
$ pin -t obj-intel64/CodeCoverage.dylib -w test -- ./test
CodeCoverage tool by Agustin Gianni (agustingianni@gmail.com)
White-listing image: test
Logging code coverage information to: trace.log
Loaded image: 0x000000010a1df000:0x000000010a1dffff -> test
Loaded image: 0x00007fff65a5c000:0x00007fff65acffff -> dyld
Loaded image: 0x00007fff94b07000:0x00007fff94b5afff -> libc++.1.dylib
Loaded image: 0x00007fff942fa000:0x00007fff942fbfff -> libSystem.B.dylib
Loaded image: 0x00007fff8bf30000:0x00007fff8bf59fff -> libc++abi.dylib
Loaded image: 0x00007fff875ac000:0x00007fff875b0fff -> libcache.dylib
$ ll trace.log
-rw------- 1 anon staff 3.1K Apr 28 01:01 trace.log
```
You can whitelist as many images as you want by passing multiple `-w` arguments to the pintool.
If no `-w` arguments are supplied, the pintool will trace all loaded images.
# Authors
* Agustin Gianni ([@agustingianni](https://twitter.com/agustingianni))
@@ -0,0 +1,51 @@
#ifndef TRACEFILE_H_
#define TRACEFILE_H_
#include <string>
#include <cstdarg>
#include <cstdio>
#include <cstdlib>
#include <iostream>
class TraceFile {
public:
TraceFile(const std::string& filename)
{
m_file = fopen(filename.c_str(), "w+");
if (!m_file) {
std::cerr << "Could not open the log file." << std::endl;
std::abort();
}
}
~TraceFile()
{
if (fclose(m_file) != 0) {
std::cerr << "Could not close the log file." << std::endl;
std::abort();
}
}
void write_binary(const void* ptr, size_t size)
{
if (fwrite(ptr, size, 1, m_file) != 1) {
std::cerr << "Could not log to the log file." << std::endl;
std::abort();
}
}
void write_string(const char* format, ...)
{
va_list args;
va_start(args, format);
if (vfprintf(m_file, format, args) < 0) {
std::cerr << "Could not log to the log file." << std::endl;
std::abort();
}
va_end(args);
}
private:
FILE* m_file;
};
#endif /* TRACEFILE_H_ */
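`TraceFile` mixes formatted text (`write_string`) with raw binary records (`write_binary`) in a single stream. A hypothetical, self-contained sketch of that pattern — the file name and record contents are illustrative, not the pintool's defaults:

```cpp
#include <cstdio>
#include <string>

// Writes a text header followed by one raw 8-byte record, mirroring how a
// TraceFile-style logger interleaves write_string and write_binary calls.
// Returns the total number of bytes written, or 0 on failure.
size_t write_example_log(const std::string& path)
{
    FILE* f = std::fopen(path.c_str(), "w+");
    if (!f)
        return 0;
    std::fprintf(f, "BB Table: %u bbs\n", 1u); // write_string analogue
    unsigned char record[8] = {0};
    std::fwrite(record, sizeof(record), 1, f); // write_binary analogue
    long bytes = std::ftell(f);                // total bytes written so far
    std::fclose(f);
    std::remove(path.c_str());                 // clean up the example file
    return bytes < 0 ? 0 : static_cast<size_t>(bytes);
}
```

Because the header uses `vfprintf`-style formatting while the records are raw bytes, the log is only partially human-readable, matching the drcov convention of a text preamble followed by a binary table.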
@@ -0,0 +1,49 @@
@echo off
cls
cl ^
/c /Fo /nologo /EHa- /EHs- /GR- /GS- /Gd /Gm- /Gy /MD /O2 /Oi- /Oy- /TP /W3 /WX- /Zc:forScope /Zc:inline /Zc:wchar_t /wd4316 /wd4530 /fp:strict ^
/DTARGET_IA32E /DHOST_IA32E /DTARGET_WINDOWS /DWIN32 /D__PIN__=1 /DPIN_CRT=1 /D_STLP_IMPORT_IOSTREAMS /D__LP64__ ^
/I"%PIN_ROOT%\extras\xed-intel64\include\xed" ^
/I%PIN_ROOT%\source\include\pin ^
/I%PIN_ROOT%\source\include\pin\gen ^
/I%PIN_ROOT%\source\tools\InstLib ^
/I%PIN_ROOT%\extras\components\include ^
/I%PIN_ROOT%\extras\stlport\include ^
/I%PIN_ROOT%\extras ^
/I%PIN_ROOT%\extras\libstdc++\include ^
/I%PIN_ROOT%\extras\crt\include ^
/I%PIN_ROOT%\extras\crt ^
/I"%PIN_ROOT%\extras\crt\include\arch-x86_64" ^
/I%PIN_ROOT%\extras\crt\include\kernel\uapi ^
/I"%PIN_ROOT%\extras\crt\include\kernel\uapi\asm-x86" ^
/FIinclude/msvc_compat.h CodeCoverage.cpp ImageManager.cpp ImageManager.h TraceFile.h
link ^
/ERRORREPORT:QUEUE ^
/OUT:CodeCoverage64.dll ^
/INCREMENTAL:NO ^
/NOLOGO ^
/LIBPATH:%PIN_ROOT%\intel64\lib ^
/LIBPATH:"%PIN_ROOT%\intel64\lib-ext" ^
/LIBPATH:"%PIN_ROOT%\extras\xed-intel64\lib" ^
/LIBPATH:%PIN_ROOT%\intel64\runtime\pincrt pin.lib xed.lib pinipc.lib pincrt.lib kernel32.lib crtbeginS.obj ^
/NODEFAULTLIB ^
/MANIFEST:NO ^
/OPT:NOREF ^
/TLBID:1 ^
/ENTRY:"Ptrace_DllMainCRTStartup" ^
/BASE:"0xC5000000" ^
/DYNAMICBASE ^
/NXCOMPAT ^
/IMPLIB:CodeCoverage.lib ^
/MACHINE:X64 ^
/SAFESEH:NO ^
/export:main ^
/ignore:4049 ^
/ignore:4210 ^
/ignore:4217 ^
/ignore:4281 ^
/DLL CodeCoverage.obj ImageManager.obj
del *.obj *.pdb *.exp *.lib
@@ -0,0 +1,50 @@
@echo off
cls
cl ^
/c /Fo /nologo /EHa- /EHs- /GR- /GS- /Gd /Gm- /Gy /MD /O2 /Oi- /Oy- /TP /W3 /WX- /Zc:forScope /Zc:inline /Zc:wchar_t /wd4316 /wd4530 /fp:precise ^
/DTARGET_IA32 /DHOST_IA32 /DTARGET_WINDOWS /DWIN32 /D__PIN__=1 /DPIN_CRT=1 /D_STLP_IMPORT_IOSTREAMS /D__i386__ ^
/I"%PIN_ROOT%\extras\xed-ia32\include\xed" ^
/I%PIN_ROOT%\source\include\pin ^
/I%PIN_ROOT%\source\include\pin\gen ^
/I%PIN_ROOT%\source\tools\InstLib ^
/I"%PIN_ROOT%\extras\xed-ia32\include" ^
/I%PIN_ROOT%\extras\components\include ^
/I%PIN_ROOT%\extras\stlport\include ^
/I%PIN_ROOT%\extras ^
/I%PIN_ROOT%\extras\libstdc++\include ^
/I%PIN_ROOT%\extras\crt\include ^
/I%PIN_ROOT%\extras\crt ^
/I"%PIN_ROOT%\extras\crt\include\arch-x86" ^
/I%PIN_ROOT%\extras\crt\include\kernel\uapi ^
/I"%PIN_ROOT%\extras\crt\include\kernel\uapi\asm-x86" ^
/FIinclude/msvc_compat.h CodeCoverage.cpp ImageManager.cpp ImageManager.h TraceFile.h
link ^
/ERRORREPORT:QUEUE ^
/OUT:CodeCoverage.dll ^
/INCREMENTAL:NO ^
/NOLOGO ^
/LIBPATH:%PIN_ROOT%\ia32\lib ^
/LIBPATH:"%PIN_ROOT%\ia32\lib-ext" ^
/LIBPATH:"%PIN_ROOT%\extras\xed-ia32\lib" ^
/LIBPATH:%PIN_ROOT%\ia32\runtime\pincrt pin.lib xed.lib pinipc.lib pincrt.lib kernel32.lib crtbeginS.obj ^
/NODEFAULTLIB ^
/MANIFEST:NO ^
/OPT:NOREF ^
/TLBID:1 ^
/ENTRY:"Ptrace_DllMainCRTStartup@12" ^
/BASE:"0x55000000" ^
/DYNAMICBASE ^
/NXCOMPAT ^
/IMPLIB:CodeCoverage.lib ^
/MACHINE:X86 ^
/SAFESEH:NO ^
/export:main ^
/ignore:4049 ^
/ignore:4210 ^
/ignore:4217 ^
/ignore:4281 ^
/DLL CodeCoverage.obj ImageManager.obj
del *.obj *.pdb *.exp *.lib
@@ -0,0 +1,9 @@
REM - Clean up an existing or past 'test session'
taskkill /F /IM "binaryninja.exe"
timeout 1
REM del "..\..\testcase\*.id0"
REM del "..\..\testcase\*.id1"
REM del "..\..\testcase\*.id2"
REM del "..\..\testcase\*.nam"
REM del "..\..\testcase\*.til"
REM del "..\..\testcase\*.$$$"
@@ -1,4 +1,6 @@
REM - Clean up an existing or past 'test session'
taskkill /F /IM "ida.exe"
taskkill /F /IM "ida64.exe"
taskkill /F /IM "idaq.exe"
taskkill /F /IM "idaq64.exe"
timeout 1
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binary Ninja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binary Ninja session
start "" "C:\tools\disassemblers\BinaryNinja\binaryninja.exe" "..\..\testcase\boombox.bndb"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binary Ninja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binary Ninja session
start "" "C:\tools\disassemblers\BinaryNinja\binaryninja.exe" "..\..\testcase\ida74\ida64.bndb"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binary Ninja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binary Ninja session
start "" "C:\tools\disassemblers\BinaryNinja\binaryninja.exe"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binja session
start "" "C:\tools\disassemblers\BinaryNinja_Personal\binaryninja.exe" "..\..\testcase\boombox.bndb"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.0\ida64.exe" "..\..\testcase\boombox7.i64"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.1\ida.exe" "..\..\testcase\idaq71.idb"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.4\ida64.exe" "..\..\testcase\boombox74.i64"
@@ -0,0 +1,19 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.4\ida64.exe" "..\..\testcase\ida74\ida64.exe.i64"
REM start "" "C:\tools\disassemblers\IDA 7.4\ida64.exe" "C:\Users\user\Desktop\JavaScriptCore_13.4.i64"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
"C:\tools\disassemblers\IDA 7.0\ida64.exe" "-B" "..\..\testcase\boombox.exe"
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.0\ida.exe" "..\..\testcase\idaq7.idb"
@@ -1,17 +0,0 @@
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\tools\disassemblers\IDA 6.8\plugins\*lighthouse_plugin.py"
rmdir "C:\tools\disassemblers\IDA 6.8\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 6.8\plugins\"
del /F /Q "C:\tools\disassemblers\IDA 6.8\plugins\.#lighthouse_plugin.py"
REM - Relaunch an IDA session
start "" "C:\tools\disassemblers\IDA 6.8\idaq64.exe" "..\..\testcase\boombox.i64"
@@ -1,17 +0,0 @@
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\tools\disassemblers\IDA 6.95\plugins\*lighthouse_plugin.py"
rmdir "C:\tools\disassemblers\IDA 6.95\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 6.95\plugins\"
del /F /Q "C:\tools\disassemblers\IDA 6.95\plugins\.#lighthouse_plugin.py"
REM - Relaunch an IDA session
start "" "C:\tools\disassemblers\IDA 6.95\idaq64.exe" "..\..\testcase\boombox95.i64"
@@ -0,0 +1,14 @@
import os
import binaryninja
target = os.path.abspath(os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"..",
"testcase",
"boombox.exe"
))
print "Opening '%s' headlessly..." % target
x = binaryninja.BinaryViewType["PE"].open(target)
x.update_analysis_and_wait()
print "DONE!"
@@ -1,412 +0,0 @@
import logging
import collections
import idaapi
import idautils
from lighthouse.util import compute_color_on_gradiant, FlowChartCache
logger = logging.getLogger("Lighthouse.Coverage")
#------------------------------------------------------------------------------
# Database Level Coverage
#------------------------------------------------------------------------------
class DatabaseCoverage(object):
"""
Manage coverage data and metrics for the whole database.
TODO/NOTE:
In the long run, I imagine this class will grow to become
the hub for all coverage data. By the time the coverage reaches
this hub, it should be in a generic (offset, size) block format.
This hub will be the data source should a user wish to flip
between any loaded coverage, or even view metrics on a union of
the loaded coverages.
As the class sits now, it is minimal and caters to only a single
source of coverage data.
"""
def __init__(self):
self.coverage_data = None
self.functions = {}
self.orphans = []
def add_coverage(self, base, coverage_data):
"""
Enlighten the database to new coverage data.
"""
self.coverage_data = bake_coverage_addresses(base, coverage_data)
self.functions, self.orphans = build_function_coverage(self.coverage_data)
def finalize(self, palette):
"""
Finalize coverage data.
"""
for function in self.functions.itervalues():
function.finalize(palette)
#------------------------------------------------------------------------------
# Function Level Coverage
#------------------------------------------------------------------------------
class FunctionCoverage(object):
"""
Manages coverage data at the function level.
This wraps basic function metadata (address, name, # of nodes, etc)
and provides access/metrics to coverage data at a function level.
"""
def __init__(self, flowchart, name=None):
# function metadata
self.name = name
self.address = flowchart.bounds.startEA
self.size = 0
# node metadata
self.nodes = {}
self.exec_nodes = set()
# baked metrics
self.insn_count = 0
self.node_count = 0
self.exec_insn_count = 0
self.exec_node_count = 0
# baked colors
self.coverage_color = 0
self.profiling_color = 0
# automatically fill the fields we were not passed
self._self_populate(flowchart)
@property
def instructions(self):
"""
The number of instructions in this function.
"""
return sum(node.instructions for node in self.nodes.itervalues())
@property
def executed_instructions(self):
"""
The number of executed instructions in this function.
"""
return sum(node.instructions for node in self.exec_nodes)
@property
def percent_instruction(self):
"""
The function coverage percentage by instruction execution.
"""
try:
return (float(self.executed_instructions) / self.instructions)
except ZeroDivisionError:
return 0
@property
def percent_node(self):
"""
The function coverage percentage by node (basic block) execution.
"""
try:
return (float(len(self.exec_nodes)) / self.node_count)
except ZeroDivisionError:
return 0
#----------------------------------------------------------------------
# Information Population
#----------------------------------------------------------------------
def _self_populate(self, flowchart):
"""
Populate the function fields against the open IDB.
"""
# get the function name from the database
if not self.name:
self.name = idaapi.get_func_name2(self.address)
# get the function's nodes from the database
if not self.node_count:
self._self_populate_nodes(flowchart)
def _self_populate_nodes(self, flowchart):
"""
Populate the function nodes against the open IDB.
"""
assert self.size == 0
#
# iterate through every node (basic block) in the flowchart for a given
# function so that we may initialize a NodeEA --> NodeCoverage map
#
for node_id in xrange(flowchart.size()):
# first, create a new node coverage item for this node
new_node = NodeCoverage(flowchart[node_id], node_id)
# add the node's byte size to our computed function size
self.size += new_node.size
# save the node coverage item into our function's node map
self.nodes[new_node.address] = new_node
# bake the total node count in so we don't re-compute it repeatedly
self.node_count = flowchart.size()
#----------------------------------------------------------------------
# Controls
#----------------------------------------------------------------------
def mark_node(self, start_address):
"""
Add the given node ID to the set of tainted nodes.
"""
self.exec_nodes.add(self.nodes[start_address])
def finalize(self, palette):
"""
Finalize the coverage metrics for faster access.
"""
# bake metrics
self.insn_count = self.instructions
self.node_count = len(self.nodes)
self.exec_insn_count = self.executed_instructions
self.exec_node_count = len(self.exec_nodes)
self.insn_percent = self.percent_instruction
self.node_percent = self.percent_node
# bake colors
self.coverage_color = compute_color_on_gradiant(
self.insn_percent,
palette.coverage_bad,
palette.coverage_good
)
# TODO
#self.profiling_color = None
#------------------------------------------------------------------------------
# Node Level Coverage
#------------------------------------------------------------------------------
class NodeCoverage(object):
"""
Manages coverage data at the node (basic block) level.
"""
def __init__(self, node, node_id):
self.address = node.startEA
self.size = node.endEA - node.startEA
self.id = node_id
self.instructions = 0
# loop through the node's entire range and count its instructions
current_address = self.address
while node.endEA > current_address:
self.instructions += 1
current_address = idaapi.next_not_tail(current_address)
#------------------------------------------------------------------------------
# Coverage Helpers
#------------------------------------------------------------------------------
def bake_coverage_addresses(base, coverage_blocks):
"""
Bake relative coverage offsets into absolute addresses, in-place.
"""
for i in xrange(len(coverage_blocks)):
offset, size = coverage_blocks[i]
coverage_blocks[i] = (base + offset, size)
return coverage_blocks
def init_function_converage():
"""
Build a clean function map ready to populate with future coverage.
"""
functions = {}
for function_address in idautils.Functions():
function = idaapi.get_func(function_address)
flowchart = idaapi.qflow_chart_t("", function, idaapi.BADADDR, idaapi.BADADDR, 0)
functions[function_address] = FunctionCoverage(flowchart)
return functions
def build_function_coverage(coverage_blocks):
"""
Map block based coverage data to database defined basic blocks (nodes).
-----------------------------------------------------------------------
NOTE:
I don't like writing overly large / complex functions. But this
will be an important high compute + IDB access point for larger
data sets.
I put some effort into reducing database access, excessive
searches, iterations, instantiations, etc. I am concerned about
performance overhead that may come with trying to break this out
into multiple functions, but I encourage you to try :-)
-----------------------------------------------------------------------
Input:
+- coverage_blocks:
| a list of tuples in (offset, size) format that define coverage
'
-----------------------------------------------------------------------
Output:
+- function_map:
| a map keyed with a function address and holds function coverage
|
| eg: { functionEA: FunctionCoverage(...) }
|
+- orphans:
| a list of tuples (offset, size) of coverage fragments that could
| not be mapped into any defined functions / nodes
|
| eg: [(offset, size), ...]
'
"""
function_map, orphans = {}, []
# TODO
FLOWCHART_CACHE_SIZE = 6
flowchart_cache = FlowChartCache(FLOWCHART_CACHE_SIZE)
#
# The purpose of this mega while loop is to process the raw block
# based coverage data and build a comprehensive mapping of nodes
# throughout the database that are tainted by it.
#
blocks = collections.deque(coverage_blocks)
while blocks:
# pop off the next coverage block
address, size = blocks.popleft()
# retrieve the flowchart for this address
try:
flowchart, cached_base = flowchart_cache.get(address)
# failed to locate flowchart for this address. the address likely
# does not fall inside of a defined function
except Exception as e:
orphans.append((address, size))
continue
# alias the function's address from the flowchart for convenience
function_address = flowchart.bounds.startEA
#
# At this point, we have located the flowchart corresponding to
# this address. We are now ready to identify which node our
# current coverage block (address, size) starts in.
#
#
# walk through every node (basic block) in the flowchart until a
# node corresponding with our coverage block is found
#
flowchart_size = flowchart.size()
for count in xrange(flowchart_size):
# get the last basic block we started on
index = (cached_base + count) % flowchart_size
bb = flowchart[index]
# the coverage block (address) starts in this node
if bb.startEA <= address < bb.endEA:
#
# first, retrieve the coverage data item for the function
# corresponding with this flowchart.
#
try:
function_coverage = function_map[function_address]
#
# looks like this is the first time we have identified
# coverage for this function. create a coverage data item
# for the function now and use that
#
except KeyError as e:
function_coverage = FunctionCoverage(flowchart)
function_map[function_address] = function_coverage
#
# now we taint the basic block that we hit
#
function_map[function_address].mark_node(bb.startEA)
#
# depending on coverage & bb quality, we also check for
# the possibility of a fragment due to the coverage block
# spilling into the next basic block.
#
# does the coverage block spill past this basic block?
end_address = address + size
if end_address > bb.endEA:
# yes, compute the fragment size and prepend the work
# to be consumed later (next iteration, technically)
fragment_address = bb.endEA
fragment_size = end_address - bb.endEA
blocks.appendleft((fragment_address, fragment_size))
# update the flowchart cache
flowchart_cache.set((flowchart, index))
# all done, break from the bb for loop
break
# end of if statement
# end of for loop
#
# We made it through the entire flowchart for this function without
# finding an appropriate basic block (node) for the coverage data.
# this is strange, but whatever... just log the fragment as an
# orphan for later investigation.
#
else:
orphans.append((address, size))
# end of while loop
#
# We are done processing the coverage data given to us. Now we
# enumerate and initialize all the functions that had no coverage.
#
# NOTE: linear sweep, no reason to use the flowcache here
for function_address in idautils.Functions():
if function_address not in function_map:
function = idaapi.get_func(function_address)
flowchart = idaapi.qflow_chart_t("", function, idaapi.BADADDR, idaapi.BADADDR, 0)
function_map[function_address] = FunctionCoverage(flowchart)
# done, return results
return (function_map, orphans)
@@ -1,165 +0,0 @@
import logging
import idaapi
from lighthouse.util.ida import *
logger = logging.getLogger("Lighthouse.Paint")
#------------------------------------------------------------------------------
# Painting
#------------------------------------------------------------------------------
def paint_coverage(coverage, color):
"""
Paint the database using the given coverage.
"""
# paint individual instructions
paint_instructions(coverage.coverage_data, color)
# paint nodes in function graphs
paint_nodes(coverage.functions, color)
# NOTE: We paint hexrays on-request
#------------------------------------------------------------------------------
# Painting - Instructions / Items (Lines)
#------------------------------------------------------------------------------
def paint_instructions(coverage_blocks, color):
"""
Paint instructions using the given coverage blocks.
"""
for address, size in coverage_blocks:
color_items(address, size, color)
def color_items(address, size, color):
"""
Color a region of bytes as specified by address and size.
"""
# loop through the entire region (address -> address+size) coloring lines
while size > 0:
# color the current item
idaapi.set_item_color(address, color)
# move forward to the next item
next_address = idaapi.next_not_tail(address)
size -= next_address - address
address = next_address
# done
#------------------------------------------------------------------------------
# Painting - Nodes (Basic Blocks)
#------------------------------------------------------------------------------
def paint_nodes(functions, color):
"""
Paint function graph nodes using the given function coverages.
"""
for function_coverage in functions.itervalues():
color_nodes(function_coverage.address, function_coverage.exec_nodes, color)
def color_nodes(function_address, nodes, color):
"""
Color a list of nodes within the function graph at function_address.
"""
# create node info object with specified color
node_info = idaapi.node_info_t()
node_info.bg_color = color
# paint the specified nodes
for node in nodes:
idaapi.set_node_info2(
function_address,
node.id,
node_info,
idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
)
#------------------------------------------------------------------------------
# Painting - HexRays (Decompilation / Source)
#------------------------------------------------------------------------------
def paint_hexrays(vdui, function_coverage, color):
"""
Paint decompilation text for the given HexRays Window.
"""
decompilation_text = vdui.cfunc.get_pseudocode()
#
# the objective here is to paint hexrays lines that are associated with
# our coverage data. unfortunately, there are very few API resources that
# link decompilation line numbers to anything (eg, citems, nodes, ea, etc)
#
# this means that we must build our own data relationships to draw from
#
#
# first, let's build a relationship between a given line of text, and the
# citems that contribute to it. the only way to do that (as I see it) is
# to lex citem ID's out of the decompiled output string
#
line2citem = map_line2citem(decompilation_text)
#
# now that we have some understanding of how citems contribute to each
# line of decompiled text, we can use this information to build a
# relationship that ties graph nodes (basic blocks) to individual lines.
#
line2node = map_line2node(vdui.cfunc, line2citem)
# great, now we have all the information we need to paint
#
# paint hexrays output
#
lines_painted = 0
# extract the node ids that have been hit by our function's coverage data
coverage_indexes = set(node.id for node in function_coverage.exec_nodes)
#
# now we loop through every line_number of the decompiled text that claims
# to have a relationship with a graph node (basic block) and check to see
# if it contains a node our coverage has marked as executed
#
for line_number, node_indexes in line2node.iteritems():
#
# if there is any intersection of nodes on this line and the coverage
# data's set of executed nodes, color it
#
if node_indexes & coverage_indexes:
decompilation_text[line_number].bgcolor = color
lines_painted += 1
#
# done painting from our coverage data
#
# if there was nothing painted yet, there's no point in continuing...
if not lines_painted:
return
#
# if we made it this far, we must have painted *some* lines inside the
# function. That means we should paint the function declaration and
# header (variable declaration) lines, as their execution is implied
#
for line_number in xrange(0, vdui.cfunc.hdrlines):
decompilation_text[line_number].bgcolor = color
lines_painted += 1
# finally, refresh the view
idaapi.refresh_idaview_anyway()
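The painting decision above reduces to a per-line set intersection: a line of decompiled text is painted when any graph node that contributed to it appears in the set of executed nodes. A hypothetical IDA-free sketch of just that decision (names here are illustrative, not part of the HexRays API):

```python
# Standalone sketch of the per-line painting selection performed above.
def select_painted_lines(line2node, executed_node_ids):
    """Return sorted line numbers whose node sets intersect the coverage set."""
    executed = set(executed_node_ids)
    return sorted(
        line_number
        for line_number, node_ids in line2node.items()
        if node_ids & executed
    )
```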
@@ -1 +0,0 @@
from drcov import DrcovData
@@ -1,330 +0,0 @@
#!/usr/bin/python
import os
import sys
import mmap
import struct
#------------------------------------------------------------------------------
# drcov log parser
#------------------------------------------------------------------------------
class DrcovData(object):
"""
DrcovData
"""
def __init__(self, filepath=None):
# drcov header attributes
self.version = 0
self.flavor = None
# drcov module table
self.module_table_count = 0
self.module_table_version = 0
self.modules = []
# drcov basic block data
self.bb_table_count = 0
self.bb_table_is_binary = True
self.basic_blocks = []
# parse the given filepath
self._parse_drcov_file(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def filter_by_module(self, module_name):
"""
Extract coverage blocks pertaining to the named module.
"""
# locate the coverage that matches the given module_name
for module in self.modules:
if module.filename == module_name:
mod_id = module.id
break
# failed to find a module that matches the given name, bail
else:
raise ValueError("Failed to find matching module in coverage")
# loop through the coverage data and filter out data for only this module
coverage_blocks = []
for bb in self.basic_blocks:
if bb.mod_id == mod_id:
coverage_blocks.append((bb.start, bb.size))
return coverage_blocks
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse_drcov_file(self, filepath):
"""
Parse drcov coverage from the given log file.
"""
with open(filepath, "rb") as infile:
f = mmap.mmap(infile.fileno(), 0, access=mmap.ACCESS_READ)
self._parse_drcov_header(f)
self._parse_module_table(f)
self._parse_bb_table(f)
def _parse_drcov_data(self, drcov_data):
"""
Parse drcov coverage from the given data blob.
"""
pass # TODO
#--------------------------------------------------------------------------
# Parsing Routines - Internals
#--------------------------------------------------------------------------
def _parse_drcov_header(self, f):
"""
Parse drcov log header from filestream.
"""
# parse drcov version from log
# eg: DRCOV VERSION: 2
version_line = f.readline().strip()
self.version = int(version_line.split(":")[1])
# parse drcov flavor from log
# eg: DRCOV FLAVOR: drcov
flavor_line = f.readline().strip()
self.flavor = flavor_line.split(":")[1]
assert self.version == 2, "Only drcov version 2 log files supported"
def _parse_module_table(self, f):
"""
Parse drcov log module table from filestream.
"""
self._parse_module_table_header(f)
self._parse_module_table_columns(f)
self._parse_module_table_modules(f)
def _parse_module_table_header(self, f):
"""
Parse drcov log module table header from filestream.
"""
# parse module table 'header'
# eg: Module Table: version 2, count 11
header_line = f.readline().strip()
field_name, field_data = header_line.split(": ")
#assert field_name == "Module Table"
# separate 'version X' and 'count Y' from each other
version_data, count_data = field_data.split(", ")
# parse module table version out of 'version X'
data_name, version = version_data.split(" ")
#assert data_name == "version"
self.module_table_version = int(version)
# parse module count in table from 'count Y'
data_name, count = count_data.split(" ")
#assert data_name == "count"
self.module_table_count = int(count)
def _parse_module_table_columns(self, f):
"""
Parse drcov log module table columns from filestream.
"""
# parse module table 'columns'
# eg: Columns: id, base, end, entry, checksum, timestamp, path
column_line = f.readline().strip()
field_name, field_data = column_line.split(": ")
#assert field_name == "Columns"
# separate column names
# eg: id, base, end, entry, checksum, timestamp, path
columns = field_data.split(", ")
#if self.module_table_version == 2:
#assert columns == ["id", "base", "end", "entry", "checksum", "timestamp", "path"]
def _parse_module_table_modules(self, f):
"""
Parse drcov log modules in the module table from filestream.
"""
# loop through each *expected* line in the module table and parse it
for i in xrange(self.module_table_count):
module = DrcovModule(f.readline().strip(), self.module_table_version)
self.modules.append(module)
def _parse_bb_table(self, f):
"""
Parse drcov log basic block table from filestream.
"""
self._parse_bb_table_header(f)
self._parse_bb_table_entries(f)
def _parse_bb_table_header(self, f):
"""
Parse drcov log basic block table header from filestream.
"""
# parse basic block table 'header'
# eg: BB Table: 2792 bbs
header_line = f.readline().strip()
field_name, field_data = header_line.split(": ")
#assert field_name == "BB Table"
# parse basic block count out of 'X bbs'
count_data, data_name = field_data.split(" ")
#assert data_name == "bbs"
self.bb_table_count = int(count_data)
# peek at the next few bytes to determine if this is a binary bb table.
# An ascii bb table will have the line: 'module id, start, size:'
token = "module id"
saved_position = f.tell()
# is this an ascii table?
if f.read(len(token)) == token:
self.bb_table_is_binary = False
f.readline() # yep, so dispose of the rest of this line
# nope! binary table, seek back to the start of the table
else:
self.bb_table_is_binary = True
f.seek(saved_position)
def _parse_bb_table_entries(self, f):
"""
Parse drcov log basic block table entries from filestream.
"""
# loop through each *expected* line/blob in the bb table and parse it
for i in xrange(self.bb_table_count):
# parse the current filestream data into a new basic block
if self.bb_table_is_binary:
basic_block = DrcovBasicBlock(f.read(DrcovBasicBlock.BYTESIZE))
else:
basic_block = DrcovBasicBlock(f.readline().strip(), binary=False)
# save the parsed block
self.basic_blocks.append(basic_block)
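The textual headers parsed above follow a simple `name: field, field` shape, e.g. `Module Table: version 2, count 11`. A minimal, hedged Python 3 sketch of that parse (illustrative function name, not part of the class above):

```python
# Pull the "version" and "count" fields out of a drcov module table header
# line, eg "Module Table: version 2, count 11".
def parse_module_table_header(header_line):
    _, field_data = header_line.split(": ", 1)
    version_data, count_data = field_data.split(", ")
    version = int(version_data.split(" ")[1])   # "version 2" -> 2
    count = int(count_data.split(" ")[1])       # "count 11"  -> 11
    return version, count
```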
#------------------------------------------------------------------------------
# drcov module parser
#------------------------------------------------------------------------------
class DrcovModule(object):
"""
Parser & wrapper for module details as found in a drcov coverage log.
A 'module' in this context is a .EXE, .DLL, ELF, MachO, etc.
"""
def __init__(self, module_data, version):
self.id = 0
self.base = 0
self.end = 0
self.entry = 0
self.checksum = 0
self.timestamp = 0
self.path = ""
self.filename = ""
# parse the module
self._parse_module(module_data, version)
def _parse_module(self, module_line, version):
"""
Parse a Module Table v2 line.
"""
if version == 2:
data = module_line.split(", ")
# parse the individual fields from the module specification line
self.id = int(data[0])
self.base = int(data[1], 16)
self.end = int(data[2], 16)
self.entry = int(data[3], 16)
self.checksum = int(data[4], 16)
self.timestamp = int(data[5], 16)
self.path = str(data[6])
self.filename = os.path.basename(self.path)
# unknown format
else:
raise ValueError("Unknown module format (v%u)" % version)
#------------------------------------------------------------------------------
# drcov basic block parser
#------------------------------------------------------------------------------
class DrcovBasicBlock(object):
"""
Parser & wrapper for basic block details as found in a drcov coverage log.
NOTE:
Based off the C structure as used by drcov -
/* Data structure for the coverage info itself */
typedef struct _bb_entry_t {
uint start; /* offset of bb start from the image base */
ushort size;
ushort mod_id;
} bb_entry_t;
"""
BYTESIZE = 4 + 2 + 2 # uint + short + short
def __init__(self, bb_data, binary=True):
# basic block fields
self.start = 0
self.size = 0
self.mod_id = 0
# parse the basic block data
if binary:
self._parse_bb_binary(bb_data)
else:
self._parse_bb_ascii(bb_data)
def _parse_bb_binary(self, bb_data):
"""
Parse a binary basic block entry.
"""
assert len(bb_data) == self.BYTESIZE
self.start, = struct.unpack('<I', bb_data[0:4])
self.size, = struct.unpack('<H', bb_data[4:6])
self.mod_id, = struct.unpack('<H', bb_data[6:8])
def _parse_bb_ascii(self, bb_line):
"""
Parse an ascii basic block entry.
"""
raise ValueError("TODO: implement ascii/text bb_entry_t parsing")
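The three fields of the `bb_entry_t` layout described in the class docstring can also be decoded with a single `struct.unpack` call using the `"<IHH"` format (little-endian uint, ushort, ushort), which matches `BYTESIZE = 8`. A self-contained Python 3 sketch:

```python
import struct

BB_ENTRY_FORMAT = "<IHH"  # start (uint), size (ushort), mod_id (ushort)
BB_ENTRY_SIZE = struct.calcsize(BB_ENTRY_FORMAT)  # 8 bytes

def parse_bb_entry(blob):
    """Decode one binary bb_entry_t blob into (start, size, mod_id)."""
    start, size, mod_id = struct.unpack(BB_ENTRY_FORMAT, blob)
    return start, size, mod_id
```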
#------------------------------------------------------------------------------
# Command Line Testing
#------------------------------------------------------------------------------
if __name__ == "__main__":
argc = len(sys.argv)
argv = sys.argv
# base usage
if argc < 2:
print "usage: %s <coverage filename>" % os.path.basename(sys.argv[0])
sys.exit()
# attempt file parse
x = DrcovData(argv[1])
for bb in x.basic_blocks:
print "0x%08x" % bb.start
@@ -1,29 +0,0 @@
import idaapi
from coverage_overview import *
#------------------------------------------------------------------------------
# IDA Action Handler Stub
#------------------------------------------------------------------------------
class IDACtxEntry(idaapi.action_handler_t):
"""
A basic Context Menu class to utilize IDA's action handlers.
"""
def __init__(self, action_function):
idaapi.action_handler_t.__init__(self)
self.action_function = action_function
def activate(self, ctx):
"""
Execute the embedded action_function when this context menu is invoked.
"""
self.action_function()
return 1
def update(self, ctx):
"""
Ensure the context menu is always available in IDA.
"""
return idaapi.AST_ENABLE_ALWAYS
@@ -1,431 +0,0 @@
import idaapi
import logging
from lighthouse.util import *
from operator import itemgetter, attrgetter
logger = logging.getLogger("Lighthouse.UI.Overview")
#------------------------------------------------------------------------------
# Coverage Data Proxy Model
#------------------------------------------------------------------------------
# declare named constants for coverage table column indexes
COV_PERCENT = 0
FUNC_NAME = 1
FUNC_ADDR = 2
BASIC_BLOCKS = 3
BRANCHES = 4
LINES = 5
FINAL_COLUMN = 7
# column -> field name mapping
COLUMN_TO_FIELD = \
{
COV_PERCENT: "percent_instruction",
FUNC_NAME: "name",
FUNC_ADDR: "address",
BASIC_BLOCKS: "node_count",
}
class CoverageModel(QtCore.QAbstractItemModel):
"""
A Qt model interface to format coverage data for Qt views.
"""
def __init__(self, db_coverage, parent=None):
super(CoverageModel, self).__init__(parent)
self._db_coverage = None
# a map to correlate a given row in the table to the function coverage
self.row2func = {}
self._rows = 0
# headers of the table
self._column_headers = \
{
COV_PERCENT: "Coverage %",
FUNC_NAME: "Function Name",
FUNC_ADDR: "Address",
BASIC_BLOCKS: "Basic Blocks",
BRANCHES: "Branches",
LINES: "Lines",
FINAL_COLUMN: "" # NOTE: stretch section, left blank for now
}
# used to make the model aware of its last sort state
self._last_sort = FUNC_ADDR
self._last_sort_order = QtCore.Qt.AscendingOrder
# used by the model to determine whether it should display 0% coverage entries
self._hide_zero = False
# update the model with the given coverage data
self.update_model(db_coverage)
#--------------------------------------------------------------------------
# AbstractItemModel Overloads
#--------------------------------------------------------------------------
def flags(self, index):
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
def parent(self, index=QtCore.QModelIndex()):
return QtCore.QModelIndex()
def index(self, row, column, parent=QtCore.QModelIndex()):
try:
return self.createIndex(row, column, self.row2func[row])
except KeyError as e:
return QtCore.QModelIndex()
def canFetchMore(self, index):
return True
def rowCount(self, index=QtCore.QModelIndex()):
"""
Return the number of rows in the model.
"""
return self._rows
def columnCount(self, index=QtCore.QModelIndex()):
"""
Return the number of columns in the model.
"""
return len(self._column_headers)
def headerData(self, column, orientation, role=QtCore.Qt.DisplayRole):
"""
Define the properties of how the table header should be displayed.
"""
if orientation == QtCore.Qt.Horizontal:
# the title of the header columns has been requested
if role == QtCore.Qt.DisplayRole:
try:
return self._column_headers[column]
except KeyError as e:
pass
# the text alignment of the header has been requested
elif role == QtCore.Qt.TextAlignmentRole:
# center align all columns
return QtCore.Qt.AlignHCenter
# unhandled header request
return None
def data(self, index, role=QtCore.Qt.DisplayRole):
"""
Define how Qt should access the underlying model data.
"""
if not index.isValid():
return None
# ensure the row requested exists
if not (0 <= index.row() < self._rows):
return None
# text alignment request
if role == QtCore.Qt.TextAlignmentRole:
# we want to align the first column left
#if index.column() == 0:
# return QtCore.Qt.AlignVCenter
# center align all other columns
return QtCore.Qt.AlignCenter
# data display request
elif role == QtCore.Qt.DisplayRole:
# lookup the func coverage object for this row
func_coverage = self.row2func[index.row()]
# Coverage % - (by instruction execution)
if index.column() == COV_PERCENT:
return "%.2f%%" % (func_coverage.insn_percent*100)
# Function Name
elif index.column() == FUNC_NAME:
return func_coverage.name
# Function Address
elif index.column() == FUNC_ADDR:
return "0x%08X" % func_coverage.address
# Basic Blocks
elif index.column() == BASIC_BLOCKS:
return "%u / %u" % (func_coverage.exec_node_count,
func_coverage.node_count)
# Branches
elif index.column() == BRANCHES:
return "TODO"
# Source Lines
elif index.column() == LINES:
return "TODO"
# cell background color request
elif role == QtCore.Qt.BackgroundRole:
func_coverage = self.row2func[index.row()]
return func_coverage.coverage_color
# font color request
elif role == QtCore.Qt.ForegroundRole:
return QtGui.QColor(QtCore.Qt.white)
return None
def sort(self, column, sort_order):
"""
Sort coverage data model by column.
"""
#
# look up the name of field in the FunctionCoverage class object
# that we would like to sort by based on the selected column
#
try:
sort_field = COLUMN_TO_FIELD[column]
except KeyError as e:
logger.warning("TODO: implement column %u sorting" % column)
return False
#
# sort the existing entries in the table by the selected field name
#
# NOTE:
# using attrgetter profiles ~8-12% faster than an equivalent lambda
# for this member access, hence the slightly unusual approach
#
sorted_functions = sorted(
self.row2func.itervalues(), # row2func has the 'existing' entries
key=attrgetter(sort_field),
reverse=sort_order
)
# finally, rebuild the row2func mapping
self.layoutAboutToBeChanged.emit()
self.row2func = dict(zip(xrange(len(sorted_functions)), sorted_functions))
self.layoutChanged.emit()
# save this as the most recent sort type
self._last_sort = column
self._last_sort_order = sort_order
return True
#--------------------------------------------------------------------------
# Model Controls
#--------------------------------------------------------------------------
def hide_zero_coverage(self, hide=True):
"""
Toggle zero coverage entries as visible.
"""
# state change matches current state, nothing to do
if self._hide_zero == hide:
return
# rebuild the row map, using the new state (hide/unhide 0% items)
self._hide_zero = hide
self._init_row2func_map()
# emit a layout changed signal if the sort attempt did not
if not self.sort(self._last_sort, self._last_sort_order):
self.layoutChanged.emit()
def update_model(self, db_coverage):
"""
Replace the underlying data source and re-generate model mappings.
"""
self._db_coverage = db_coverage
# initialize a new row2func map as the coverage data has changed
self._init_row2func_map()
# let consumers know that we have updated the model
self.layoutChanged.emit()
def _init_row2func_map(self):
"""
Initialize the mapping to go from displayed row to function.
"""
row = 0
self.row2func = {}
# no coverage, nothing else to do
if not self._db_coverage:
return
functions = self._db_coverage.functions.itervalues()
# only map items with a non-zero coverage as visible
if self._hide_zero:
for func_coverage in functions:
if func_coverage.percent_instruction:
self.row2func[row] = func_coverage
row += 1
#
# map all items as visible. It is faster to keep this loop separate from
# the above so that we don't have to check a conditional every
# iteration when not in use
#
else:
for func_coverage in self._db_coverage.functions.itervalues():
self.row2func[row] = func_coverage
row += 1
self._rows = len(self.row2func)
#------------------------------------------------------------------------------
# Coverage Overview
#------------------------------------------------------------------------------
class CoverageOverview(idaapi.PluginForm):
"""
The Coverage Overview Qt Widget.
TODO
"""
def __init__(self, db_coverage):
super(CoverageOverview, self).__init__()
self._title = "Coverage Overview"
self._model = CoverageModel(db_coverage)
def Show(self):
"""
Show the dialog.
"""
return super(CoverageOverview, self).Show(
self._title,
options=idaapi.PluginForm.FORM_PERSIST
)
def OnCreate(self, form):
"""
Called when the view is created.
"""
# NOTE/COMPAT
if using_pyqt5():
self.parent = self.FormToPyQtWidget(form)
else:
self.parent = self.FormToPySideWidget(form)
# set window icon to the coverage overview icon
self.parent.setWindowIcon(QtGui.QIcon(resource_file("icons/overview.png")))
#
# coverage list table
#
self.table = QtWidgets.QTreeView()
self.table.setRootIsDecorated(False)
self.table.setUniformRowHeights(True)
self.table.setExpandsOnDoubleClick(False)
# enable sorting on the table, default to sort by func address
self.table.setSortingEnabled(True)
self.table.header().setSortIndicator(FUNC_ADDR, QtCore.Qt.AscendingOrder)
# install a drawing delegate to draw the grid lines on the list view
delegate = GridDelegate(self.table)
self.table.setItemDelegate(delegate)
# install the data source for the list view
self.table.setModel(self._model)
#
# coverage list toolbar (& members)
#
self.toolbar = QtWidgets.QToolBar()
# checkbox to hide 0% coverage entries
self.hide_zero_label = QtWidgets.QLabel(" Hide 0% Coverage: ")
self.hide_zero_checkbox = QtWidgets.QCheckBox()
# populate the toolbar
self.toolbar.addWidget(self.hide_zero_label)
self.toolbar.addWidget(self.hide_zero_checkbox)
#
# ui signals
#
# connect a signal to jump to the function disas described by a row
self.table.doubleClicked.connect(self._ui_entry_double_click)
#self.table.setContextMenuPolicy(Qt.CustomContextMenu)
#self.table.customContextMenuRequested.connect(...)
self.hide_zero_checkbox.stateChanged.connect(self._ui_hide_zero_toggle)
#
# ui layout
#
layout = QtWidgets.QGridLayout()
layout.addWidget(self.table)
layout.addWidget(self.toolbar)
# install layout
self.parent.setLayout(layout)
def update_model(self, db_coverage):
"""
Passthrough to update underlying model.
"""
self._model.update_model(db_coverage)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_entry_double_click(self, index):
"""
Handle double click event on the coverage table view.
"""
# a double click on the table view will jump the user to the clicked
# function in the disassembly view
try:
idaapi.jumpto(self._model.row2func[index.row()].address)
except KeyError as e:
pass
def _ui_hide_zero_toggle(self, checked):
"""
Handle state change of 'Hide 0% Coverage' checkbox.
"""
self._model.hide_zero_coverage(checked)
#------------------------------------------------------------------------------
# Painting
#------------------------------------------------------------------------------
class GridDelegate(QtWidgets.QStyledItemDelegate):
"""
Used solely to draw a grid in the CoverageOverview.
"""
def __init__(self, parent=None):
super(GridDelegate, self).__init__(parent)
self.grid_color = QtGui.QColor(QtCore.Qt.black)
def paint(self, painter, option, index):
super(GridDelegate, self).paint(painter, option, index)
painter.save()
painter.setPen(self.grid_color)
painter.drawRect(option.rect)
painter.restore()
@@ -1,77 +0,0 @@
import os
import cProfile
import idaapi
from ida import *
from log import lmsg, logging_started, start_logging
from qtshim import using_pyqt5, QtCore, QtGui, QtWidgets
pr = cProfile.Profile()
def profile(func):
"""
Function profiling decorator.
"""
def wrap(*args, **kwargs):
global pr
pr.enable()
result = func(*args, **kwargs)
pr.disable()
pr.print_stats(sort="tottime")
return result
return wrap
def get_disas_bg_color():
"""
Get the background color of the disas text area via pixel... YOLO
PS: please expose the get_graph_color(...) palette accessor, Ilfak ;_;
"""
# find a form (eg, IDA view) to steal a pixel from
for i in xrange(5):
form = idaapi.find_tform("IDA View-%c" % chr(ord('A') + i))
if form:
break
else:
raise RuntimeError("Failed to find donor IDA View")
# lookup the Qt Widget for the given form and take 2px tall image
if using_pyqt5():
widget = idaapi.PluginForm.FormToPyQtWidget(form)
pixmap = widget.grab(QtCore.QRect(0, 0, widget.width(),2))
else:
widget = idaapi.PluginForm.FormToPySideWidget(form)
pixmap = QtGui.QPixmap.grabWidget(widget, QtCore.QRect(0, 0, widget.width(), 2))
# extract a pixel from the top center like a pleb (hopefully a background pixel :|)
img = QtGui.QImage(pixmap.toImage())
color = QtGui.QColor(img.pixel(img.width()/2,1))
# return the color of the pixel we extracted
return color
def compute_color_on_gradiant(percent, color1, color2):
"""
Compute the color specified by a percent between two colors.
"""
# dump the rgb values from QColor objects
r1, g1, b1, _ = color1.getRgb()
r2, g2, b2, _ = color2.getRgb()
# compute the new color across the gradient of color1 -> color2
r = r1 + percent * (r2 - r1)
g = g1 + percent * (g2 - g1)
b = b1 + percent * (b2 - b1)
# return the new color
return QtGui.QColor(r,g,b)
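The blend above is plain linear interpolation per channel. A Qt-free sketch of the same arithmetic on bare RGB tuples (illustrative only; the plugin feeds `QColor` objects through the identical math):

```python
# Blend two RGB triples by `percent` in [0.0, 1.0].
def blend_rgb(percent, color1, color2):
    return tuple(
        int(c1 + percent * (c2 - c1))
        for c1, c2 in zip(color1, color2)
    )
```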
def resource_file(filename):
"""
Return the absolute 'resource' filepath for a given filename.
"""
return os.path.join(idaapi.idadir("plugins"), "lighthouse", "ui", "resources", filename)
@@ -1,305 +0,0 @@
import logging
import collections
import idaapi
logger = logging.getLogger("Lighthouse.Util.IDA")
#------------------------------------------------------------------------------
# FlowChart Helpers
#------------------------------------------------------------------------------
#
# Profiling revealed that working with flowcharts was creating the most
# expensive set of operations for Lighthouse. Specifically:
#
# * Creating/requesting a flowchart from IDA
# * Repeatedly walking a flowchart from its base index (for our purposes)
# * idaapi.FlowChart & BasicBlock come with their own unnecessary overhead
#
# To try to make our flowchart operations as fast as possible throughout
# Lighthouse, we do our best to minimize the above three cases with the
# strategies outlined below.
#
# * Cache the last N flowcharts requested in an LRU cache implementation
# * Cache & reuse the last node index used for a given flowchart
# * Use qflow_chart_t directly to remove FlowChart & BasicBlock overhead
#
class FlowChartCache(object):
"""
A LRU cache implementation for IDA FlowChart lookup.
TODO: describe how & why the cache works
"""
def __init__(self, capacity=6):
self.cache = collections.deque([], capacity)
def get(self, address):
"""
Cached lookup of the flowchart for a given address.
On cache-miss, a new flowchart is generated.
"""
# cache hit
for cache_entry in self.cache:
bounds = cache_entry[0].bounds
if bounds.startEA <= address < bounds.endEA:
#logger.debug("0x%08X: cache hit!" % address)
return cache_entry
#
# flow chart is NOT in the cache...
#
#logger.debug("0x%08X: cache miss!" % address)
# create a new flowchart corresponding to the address
function = idaapi.get_func(address)
flowchart = idaapi.qflow_chart_t("", function, idaapi.BADADDR, idaapi.BADADDR, 0)
# cache the newly created flowchart
cache_entry = (flowchart, 0)
self.set(cache_entry)
# return the created flowchart entry
return cache_entry
def set(self, cache_entry):
"""
Update the cache with the given entry.
"""
function_address = cache_entry[0].bounds.startEA
# evict an old entry if it exists
for i in xrange(len(self.cache)):
if self.cache[i][0].bounds.startEA == function_address:
del self.cache[i]
break
# put this new cache entry at the front of the list
self.cache.appendleft(cache_entry)
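The bounded-LRU scheme used by `FlowChartCache` can be exercised outside IDA by substituting plain `(start, end)` ranges for flowcharts; a bounded `collections.deque` ages out the oldest entry automatically. A hedged standalone sketch (class and entries are stand-ins, not the plugin's types):

```python
import collections

class RangeCache(object):
    """LRU cache of ((start, end), cursor) entries, mirroring FlowChartCache."""

    def __init__(self, capacity=6):
        self.cache = collections.deque([], capacity)  # bounded: old entries fall off

    def get(self, address):
        # cache hit: return the entry whose range covers the address
        for entry in self.cache:
            start, end = entry[0]
            if start <= address < end:
                return entry
        return None  # cache miss (the real cache builds a new flowchart here)

    def set(self, entry):
        # evict any stale entry for the same range, then push to the front
        for i in range(len(self.cache)):
            if self.cache[i][0] == entry[0]:
                del self.cache[i]
                break
        self.cache.appendleft(entry)
```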
def map_flowchart(function_address):
"""
Map a FlowChart and its node bounds for fast access.
-----------------------------------------------------------------------
Walking the IDAPython flowcharts can actually be really slow. When we
need to repeatedly access or walk a given flowchart, we should instead
extract its layout one-time and use this minimal form when applicable.
-----------------------------------------------------------------------
Output:
+- flowchart_nodes:
| a map keyed with node ID's, holding a tuple of node bounds
|
| eg: { int(node_id): (startEA, endEA), ... }
'
"""
flowchart_nodes = {}
# retrieve the flowchart for this function
function = idaapi.get_func(function_address)
flowchart = idaapi.qflow_chart_t("", function, idaapi.BADADDR, idaapi.BADADDR, 0)
# cache the bounds for every node in this flowchart
for i in xrange(flowchart.size()):
node = flowchart[i]
flowchart_nodes[i] = (node.startEA, node.endEA)
return flowchart_nodes
#------------------------------------------------------------------------------
# HexRays Helpers
#------------------------------------------------------------------------------
def lex_citem_indexes(line):
"""
Lex all ctree item indexes from a given line of text.
"""
i = 0
indexes = []
line_length = len(line)
# lex COLOR_ADDR tokens from the line of text
while i < line_length:
# does this character mark the start of a new COLOR_* token?
if line[i] == idaapi.COLOR_ON:
# yes, so move past the COLOR_ON byte
i += 1
# is this sequence for a COLOR_ADDR?
if ord(line[i]) == idaapi.COLOR_ADDR:
# yes, so move past the COLOR_ADDR byte
i += 1
#
# A COLOR_ADDR token is followed by either 8, or 16 characters
# (a hex encoded number) that represents an address/pointer.
# in this context, it is actually the index number of a citem
#
citem_index = int(line[i:i+idaapi.COLOR_ADDR_SIZE], 16)
i += idaapi.COLOR_ADDR_SIZE
# save the extracted citem index
indexes.append(citem_index)
# skip to the next iteration as i has moved
continue
# nothing we care about happened, keep lexing forward
i += 1
# return all the citem indexes extracted from this line of text
return indexes
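The same lexing loop can run outside IDA by substituting made-up one-byte markers for the real color codes. In this sketch, the `COLOR_ON`/`COLOR_ADDR` values and the 8-digit width are assumptions standing in for `idaapi` constants (IDA uses 8 or 16 hex digits depending on the build):

```python
COLOR_ON = "\x01"    # hypothetical stand-in for idaapi.COLOR_ON
COLOR_ADDR = "\x28"  # hypothetical stand-in for idaapi.COLOR_ADDR
COLOR_ADDR_SIZE = 8  # hex digits following the marker (assumed)

def lex_indexes(line):
    """Extract every COLOR_ADDR-encoded index from a tagged line of text."""
    i, indexes = 0, []
    while i < len(line):
        if line[i] == COLOR_ON:
            i += 1
            if i < len(line) and line[i] == COLOR_ADDR:
                i += 1
                # the marker is followed by a fixed-width hex-encoded index
                indexes.append(int(line[i:i + COLOR_ADDR_SIZE], 16))
                i += COLOR_ADDR_SIZE
                continue
        i += 1
    return indexes
```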
def map_line2citem(decompilation_text):
"""
Map decompilation line numbers to citems.
-----------------------------------------------------------------------
This function allows us to build a relationship between citems in the
ctree and specific lines in the hexrays decompilation text.
-----------------------------------------------------------------------
Output:
+- line2citem:
| a map keyed with line numbers, holding sets of citem indexes
|
| eg: { int(line_number): sets(citem_indexes), ... }
'
"""
line2citem = {}
#
# it turns out that citem indexes are actually stored inline with the
# decompilation text output, hidden behind COLOR_ADDR tokens.
#
# here we pass each line of raw decompilation text to our crappy lexer,
# extracting any COLOR_ADDR tokens as citem indexes
#
for line_number in xrange(decompilation_text.size()):
line_text = decompilation_text[line_number].line
line2citem[line_number] = lex_citem_indexes(line_text)
return line2citem
def map_line2node(cfunc, line2citem):
"""
Map decompilation line numbers to graph nodes (basic blocks).
-----------------------------------------------------------------------
This function allows us to build a relationship between graph nodes
(basic blocks) and specific lines in the hexrays decompilation text.
-----------------------------------------------------------------------
Output:
+- line2node:
| a map keyed with line numbers, holding sets of node ID's
|
| eg: { int(line_number): sets(nodes), ... }
'
"""
line2node, cached_base = {}, 0
# bake items for faster access
treeitems = cfunc.treeitems
flowchart = map_flowchart(cfunc.entry_ea)
flowchart_size = len(flowchart)
#
# prior to this function, a line2citem map was built to tell us which
# citems reside on any given line of text in the decompilation output.
#
# now, we walk through this line2citem map one 'line_number' at a time in
# an effort to resolve the set of graph nodes associated with its citems.
#
for line_number, citem_indexes in line2citem.iteritems():
nodes = set()
#
# we are at the level of a single line (line_number). we now consume
# its set of citems (citem_indexes) and attempt to identify the explicit
# graph nodes they claim to be sourced from (by their reported EA)
#
for index in citem_indexes:
# get the code address of the given citem
item = treeitems[index]
address = item.ea
#
# walk the flowchart for this function and find the graph node
# (eg, basic block) that generated this citem
#
for index in xrange(flowchart_size):
node_id = (cached_base + index) % flowchart_size
startEA, endEA = flowchart[node_id]
#
# does this citem address point into this graph node? if so,
# consider it a match, and break so we can save this node_id
#
if startEA <= address < endEA:
break
#
# we never broke from the loop which means that the current citem
# could not be mapped to a node... weird. continue to the next citem
#
else:
#logger.warning("Failed to map node to basic block")
continue
#
# we broke from walking the flowchart, so we must have found a node
# that contains this citem. save the computed node_id to the set
# of known nodes we have associated with this line of text
#
nodes.add(node_id)
#
# to expedite future citem lookups, we cache the node_id we hit on
# for this flowchart walk, and use it as the starting point for the
# subsequent search. It's more likely than not that the next citem
# will fall in the same node, or one closer to this node than the
# start of the flowchart (eg, node_id = 0)
#
cached_base = node_id
#
# finally, save the completed list of node ids as identified for this
# line of decompilation text to the line2node map that we are building
#
line2node[line_number] = nodes
# all done, return the computed map
return line2node
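The cached_base optimization above can be sketched in isolation: rather than scanning the flowchart from node 0 for every citem, each scan starts at the node that matched last time and wraps around with modular arithmetic. The toy flowchart and addresses below are illustrative.

```python
# toy flowchart: a list of (start_ea, end_ea) ranges, one per basic block
flowchart = [(0x00, 0x10), (0x10, 0x24), (0x24, 0x30)]

def find_node(address, cached_base):
    """Return (node_id, new_cached_base) for the node containing address."""
    size = len(flowchart)
    for index in range(size):
        # probe nodes starting from the last hit, wrapping around the list
        node_id = (cached_base + index) % size
        start, end = flowchart[node_id]
        if start <= address < end:
            return node_id, node_id  # hit: this node seeds the next search
    return None, cached_base  # address maps to no known node

cached = 0
for ea in (0x12, 0x14, 0x28, 0x04):
    node, cached = find_node(ea, cached)
    print(hex(ea), "->", node)
```

Consecutive citems tend to fall in the same node (or one nearby), so the match is usually found within the first few probes instead of restarting at node 0 each time.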
-25
@@ -1,25 +0,0 @@
import idaapi
#------------------------------------------------------------------------------
# Pyside --> PyQt5 - COMPAT
#------------------------------------------------------------------------------
#
# NOTE:
# As of IDA 6.9, Hex-Rays has started using PyQt5 versus PySide on Qt4.
# This file tries to cut down on the compatibility checks/churn we
# would otherwise need in every other file that consumes these modules.
#
def using_pyqt5():
major, minor = map(int, idaapi.get_kernel_version().split("."))
return (major == 6 and minor >= 9)
if using_pyqt5():
import PyQt5.QtGui as QtGui
import PyQt5.QtCore as QtCore
import PyQt5.QtWidgets as QtWidgets
else:
import PySide.QtGui as QtGui
import PySide.QtCore as QtCore
QtWidgets = QtGui
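The version gate above can be exercised standalone. Note the original check predates IDA 7, so it only tests for 6.x kernels; the sketch below also accepts later majors (an assumption, not part of the original shim), and the version strings are illustrative.

```python
def using_pyqt5(kernel_version):
    """Return True if this IDA kernel ships PyQt5 (kernel 6.9 and later)."""
    # kernel versions look like '6.8', '6.95', '7.0'
    major, minor = map(int, kernel_version.split(".")[:2])
    return major > 6 or (major == 6 and minor >= 9)

for version in ("6.8", "6.9", "6.95", "7.0"):
    print(version, "->", using_pyqt5(version))
```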
-468
@@ -1,468 +0,0 @@
from idaapi import plugin_t
from lighthouse.ui import *
from lighthouse.util import *
from lighthouse.parsers import *
from lighthouse.coverage import *
from lighthouse.painting import *
# start the global logger *once*
if not logging_started():
logger = start_logging()
#------------------------------------------------------------------------------
# IDA Plugin
#------------------------------------------------------------------------------
PLUGIN_VERSION = "0.1.0"
AUTHORS = "Markus Gaasedelen"
DATE = "2017"
def PLUGIN_ENTRY():
"""
Required plugin entry point for IDAPython Plugins.
"""
return Lighthouse()
class Lighthouse(plugin_t):
"""
The Lighthouse IDA Plugin.
"""
flags = idaapi.PLUGIN_PROC | idaapi.PLUGIN_MOD
comment = "Code Coverage Explorer"
help = ""
wanted_name = "Lighthouse"
wanted_hotkey = ""
def __init__(self):
# plugin color palette
self.palette = LighthousePalette()
self.color = 0
#----------------------------------------------------------------------
# the database coverage data conglomerate
self.db_coverage = DatabaseCoverage()
# hexrays hooks
self._hxe_events = None
# plugin qt elements
self._ui_coverage_list = CoverageOverview(self.db_coverage)
# members for the 'Load Code Coverage' menu entry
self._icon_id_load = idaapi.BADADDR
self._action_name_load = "lighthouse:load_coverage"
# members for the 'Coverage Overview' menu entry
self._icon_id_overview = idaapi.BADADDR
self._action_name_overview = "lighthouse:coverage_overview"
#--------------------------------------------------------------------------
# IDA Plugin Overloads
#--------------------------------------------------------------------------
def init(self):
"""
This is called by IDA when it is loading the plugin.
"""
# attempt plugin initialization
try:
self._install_plugin()
# failed to initialize or integrate the plugin, log and skip loading
except Exception as e:
logger.exception("Failed to initialize")
return idaapi.PLUGIN_SKIP
# plugin loaded successfully, print the Lighthouse banner
self.print_banner()
logger.info("Successfully initialized")
# tell IDA to keep the plugin loaded (everything is okay)
return idaapi.PLUGIN_KEEP
def run(self, arg):
"""
This is called by IDA when this file is loaded as a script.
"""
idaapi.warning("The Lighthouse plugin cannot be run as a script.")
def term(self):
"""
This is called by IDA when it is unloading the plugin.
"""
# attempt to cleanup and uninstall our plugin instance
try:
self._uninstall_plugin()
# failed to cleanly remove the plugin, log failure
except Exception as e:
logger.exception("Failed to cleanly unload the plugin")
logger.info("-"*75)
logger.info("Plugin terminated")
#--------------------------------------------------------------------------
# Initialization
#--------------------------------------------------------------------------
def _install_plugin(self):
"""
Initialize & integrate the plugin into IDA.
"""
self._install_ui()
# TODO/NOTE: let's delay these till coverage load instead
#self._install_hexrays_hooks()
def _install_hexrays_hooks(self, _=None):
"""
Install Hexrays hook listeners.
"""
# event hooks appear to already be installed for hexrays
if self._hxe_events:
return
# ensure hexrays is loaded & ready for use
if not idaapi.init_hexrays_plugin():
raise RuntimeError("HexRays is not available yet")
#
# map our callback function to an actual member since we can't properly
# remove bindings from IDA callback registrations otherwise. it also
# makes installation tracking/status easier.
#
self._hxe_events = self._hexrays_callback
# install the callback handler
idaapi.install_hexrays_callback(self._hxe_events)
def print_banner(self):
"""
Print the Lighthouse plugin banner.
"""
# build the main banner title
banner_params = (PLUGIN_VERSION, AUTHORS, DATE)
banner_title = "Lighthouse v%s - (c) %s - %s" % banner_params
# print plugin banner
lmsg("")
lmsg("-"*75)
lmsg("---[ %s" % banner_title)
lmsg("-"*75)
lmsg("")
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _install_ui(self):
"""
Initialize & integrate all UI elements.
"""
# install the 'Load Coverage' file dialog
self._install_load_file_dialog()
self._install_open_coverage_overview()
def _install_load_file_dialog(self):
"""
Install the 'File->Load->Code Coverage File(s)...' menu entry.
"""
# create a custom IDA icon
self._icon_id_load = idaapi.load_custom_icon(
#data=str(QtCore.QResource(":/icons/overview.png").data())
data=str(open(resource_file("icons/load.png"), "rb").read())
)
# describe the action
# add a menu entry to the options dropdown on the IDA toolbar
action_desc = idaapi.action_desc_t(
self._action_name_load, # The action name.
"~C~ode Coverage File(s)...", # The action text.
IDACtxEntry(self.load_code_coverage), # The action handler.
None, # Optional: action shortcut
"Load a code coverage file for this IDB", # Optional: tooltip
self._icon_id_load # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load coverage action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self._action_name_load, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load action to 'File/Load file/' dropdown")
logger.info("Installed the 'Load Code Coverage' menu entry")
def _install_open_coverage_overview(self):
"""
Install the 'View->Open subviews->Coverage Overview' menu entry.
"""
# create a custom IDA icon
self._icon_id_overview = idaapi.load_custom_icon(
#data=str(QtCore.QResource(":/icons/overview.png").data())
data=str(open(resource_file("icons/overview.png"), "rb").read())
)
# describe the action
# add a menu entry to the options dropdown on the IDA toolbar
action_desc = idaapi.action_desc_t(
self._action_name_overview, # The action name.
"~C~overage Overview", # The action text.
IDACtxEntry(self.open_coverage_overview), # The action handler.
None, # Optional: action shortcut
"Open database code coverage overview", # Optional: tooltip
self._icon_id_overview # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register open coverage overview action with IDA")
# attach the action to the View-> dropdown menu
result = idaapi.attach_action_to_menu(
"View/Open subviews/Hex dump", # Relative path of where to add the action
self._action_name_overview, # The action ID (see above)
idaapi.SETMENU_INS # We want to insert the action before ^
)
if not result:
raise RuntimeError("Failed to attach open action to 'subviews' dropdown")
logger.info("Installed the 'Coverage Overview' menu entry")
#--------------------------------------------------------------------------
# Termination
#--------------------------------------------------------------------------
def _uninstall_plugin(self):
"""
Cleanup & uninstall the plugin from IDA.
"""
self._uninstall_ui()
# TODO: uninstall hxe hooks
#--------------------------------------------------------------------------
# Termination - UI
#--------------------------------------------------------------------------
def _uninstall_ui(self):
"""
Cleanup & uninstall the plugin UI from IDA.
"""
self._uninstall_open_coverage_overview()
self._uninstall_load_file_dialog()
def _uninstall_load_file_dialog(self):
"""
Remove the 'File->Load file->Code Coverage File(s)...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/", # Relative path of where we put the action
self._action_name_load)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self._action_name_load)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_load)
self._icon_id_load = idaapi.BADADDR
logger.info("Uninstalled the 'Load Code Coverage' menu entry")
def _uninstall_open_coverage_overview(self):
"""
Remove the 'View->Open subviews->Coverage Overview' menu entry.
"""
# remove the entry from the View-> menu
result = idaapi.detach_action_from_menu(
"View/Open subviews/Hex dump", # Relative path of where we put the action
self._action_name_overview)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self._action_name_overview)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_overview)
self._icon_id_overview = idaapi.BADADDR
logger.info("Uninstalled the 'Coverage Overview' menu entry")
#--------------------------------------------------------------------------
# UI - Actions
#--------------------------------------------------------------------------
def load_code_coverage(self):
"""
Interactive file dialog based loading of Code Coverage.
"""
# prompt the user with a QtFileDialog to select coverage files
coverage_files = self._select_code_coverage_files()
if not coverage_files:
return
# load the selected code coverage files into the plugin core
for filename in coverage_files:
self.load_code_coverage_file(filename)
# done loading coverage files, bake metrics
self.db_coverage.finalize(self.palette)
# install hexrays hooks if available for this arch/install
try:
self._install_hexrays_hooks()
except RuntimeError:
pass
#
# depending on if IDA is using a dark or light theme, we paint
# coverage with a color that will hopefully keep things readable.
# determine whether to use a 'dark' or 'light' paint
#
bg_color = get_disas_bg_color()
if bg_color.lightness() > 255.0/2:
self.color = self.palette.paint_light
else:
self.color = self.palette.paint_dark
# color the database based on coverage
paint_coverage(self.db_coverage, self.color)
# show the coverage overview
self.open_coverage_overview()
def open_coverage_overview(self):
"""
Open the Coverage Overview dialog.
"""
# ensure the database coverage is installed in the coverage overview
self._ui_coverage_list.update_model(self.db_coverage)
# make the coverage overview visible
self._ui_coverage_list.Show()
def _select_code_coverage_files(self):
"""
Open the 'Load Code Coverage' dialog and capture file selections.
"""
# create & configure a Qt File Dialog for immediate use
file_dialog = QtWidgets.QFileDialog(None, 'Open Code Coverage File(s)')
file_dialog.setFileMode(QtWidgets.QFileDialog.ExistingFiles)
# prompt the user with the file dialog, and await filename(s)
filenames, _ = file_dialog.getOpenFileNames()
logger.debug("Captured filenames from file dialog:")
logger.debug(filenames)
return filenames
#--------------------------------------------------------------------------
# Misc
#--------------------------------------------------------------------------
def load_code_coverage_file(self, filename):
"""
Load code coverage file by filename.
NOTE: At this time only binary drcov logs are supported.
"""
# load coverage data from file
coverage_data = DrcovData(filename)
# extract the coverage relevant to this IDB (well, the root binary)
root_filename = idaapi.get_root_filename()
coverage_blocks = coverage_data.filter_by_module(root_filename)
# enlighten the database's coverage hub to this new data
base = idaapi.get_imagebase()
self.db_coverage.add_coverage(base, coverage_blocks)
def _hexrays_callback(self, event, *args):
"""
HexRays callback event handler.
"""
# decompilation text generation is complete and it is about to be shown
if event == idaapi.hxe_text_ready:
vdui = args[0]
# grab the coverage data for the function of this decompilation
try:
function_coverage = self.db_coverage.functions[vdui.cfunc.entry_ea]
except KeyError:
return 0
# if coverage is zero for this function, there's nothing to paint
if not function_coverage.percent_instruction:
return 0
# paint the decompilation for this function
paint_hexrays(vdui, function_coverage, self.color)
return 0
#------------------------------------------------------------------------------
# IDA Plugin Palette
#------------------------------------------------------------------------------
class LighthousePalette(object):
"""
Color Palette for the Lighthouse plugin.
TODO: external customization
"""
def __init__(self):
"""
Initialize default palette colors for Lighthouse.
"""
# blue to red - 'dark' theme
self.coverage_bad = QtGui.QColor(221, 0, 0)
self.coverage_good = QtGui.QColor(51, 153, 255)
# green to red - 'light' theme
#self.coverage_bad = QtGui.QColor(207, 31, 0)
#self.coverage_good = QtGui.QColor(75, 209, 42)
# TODO: unused for now
self.profiling_cold = QtGui.QColor(0,0,0)
self.profiling_hot = QtGui.QColor(0,0,0)
# color used for painting disassembly/graph/hexrays
self.paint_dark = 0x00990000 # NOTE: IDA uses BBGGRR
self.paint_light = 0x00C8E696
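The BBGGRR note above is worth making concrete: IDA packs paint colors with blue in the high byte and red in the low byte, the reverse of the usual RRGGBB ordering. A minimal sketch of the round-trip (helper names are ours, not IDA's):

```python
def rgb_to_ida(r, g, b):
    """Pack an RGB triple into IDA's 0x00BBGGRR color format."""
    return (b << 16) | (g << 8) | r

def ida_to_rgb(color):
    """Unpack IDA's 0x00BBGGRR color format back into an RGB triple."""
    return (color & 0xFF, (color >> 8) & 0xFF, (color >> 16) & 0xFF)

print(hex(rgb_to_ida(0x00, 0x00, 0x99)))  # 0x990000 -- the 'dark' paint above
print(ida_to_rgb(0x00C8E696))             # (150, 230, 200) -- the 'light' paint
```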
+1
@@ -0,0 +1 @@
get_context = lambda x: None
+2
@@ -0,0 +1,2 @@
from .shell import ComposingShell
from .parser import CompositionParser
+453
@@ -0,0 +1,453 @@
import re
import string
import operator
#------------------------------------------------------------------------------
# Text Tokens
#------------------------------------------------------------------------------
class TextToken(object):
"""
A single tokenized text element.
TextTokens are effectively wrappers for individual regex matches found
when tokenizing a text string (eg, a composition string). They provide
location and type information for the token as it exists in the string.
Besides being used to normalize and guide the parsing of a given
string, TextTokens can be used for things like syntax highlighting.
"""
def __init__(self, match):
self.type = match.lastgroup
self.value = (str(match.group())).upper()
self.span = match.span()
@property
def index(self):
return self.span[0]
#
# COVERAGE_TOKEN:
# 'A' | 'B' | 'C' | ... | 'Z'
#
# NOTE: this is now dynamically computed in parse(...)
#COVERAGE_TOKEN = r'(?P<COVERAGE_TOKEN>[A-Za-z])'
AGGREGATE_TOKEN = '*'
#
# LOGIC_TOKEN:
# '|' | '^' | '&' | '-'
#
OR = r'(?P<OR>\|)'
XOR = r'(?P<XOR>\^)'
AND = r'(?P<AND>\&)'
MINUS = r'(?P<MINUS>-)'
#
# Misc Tokens
#
LPAREN = r'(?P<LPAREN>\()'
RPAREN = r'(?P<RPAREN>\))'
COMMA = r'(?P<COMMA>\,)'
WS = r'(?P<WS>\s+)'
UNKNOWN = r'(?P<UNKNOWN>.)'
TOKEN_DEFINITIONS = [OR, XOR, AND, MINUS, LPAREN, RPAREN, COMMA, WS, UNKNOWN]
#------------------------------------------------------------------------------
# AST Tokens
#------------------------------------------------------------------------------
class AstToken(object):
"""
Base class for Abstract Syntax Tree (AST) Tokens.
The Tokens subclassed from AstToken are used to build an abstract
syntax tree representing a composition equation.
Once generated, an AST can be logically evaluated by Lighthouse's
director to compose a new coverage set described by the tree.
"""
def __init__(self):
self.nodes = []
self.text_tokens = []
class TokenNull(AstToken):
"""
AST Token indicating a NULL / empty composition.
"""
def __init__(self):
super(TokenNull, self).__init__()
class TokenLogicOperator(AstToken):
"""
AST Token for a logical operator.
eg: '|'
"""
def __init__(self, logic_op, op1, op2=None):
super(TokenLogicOperator, self).__init__()
self.text_tokens = [logic_op]
# logic operator
self.operator = self.str2op(logic_op.value)
# referenced operands
self.op1 = op1
self.op2 = op2
# save the operand expressions as children
self.nodes = [op1, op2]
@staticmethod
def str2op(op_char):
if op_char == '|':
return operator.or_
if op_char == '&':
return operator.and_
if op_char == '^':
return operator.xor
if op_char == '-':
return operator.sub
raise ValueError("Unknown Operator")
class TokenCoverageSingle(AstToken):
"""
AST Token for a single coverage reference.
eg: 'A'
"""
def __init__(self, coverage_single):
super(TokenCoverageSingle, self).__init__()
self.text_tokens = [coverage_single]
# referenced coverage set
self.symbol = coverage_single.value
#------------------------------------------------------------------------------
# AST Operations
#------------------------------------------------------------------------------
def ast_equal(first, second):
"""
A fail-safe equality check of the structure and contents of two ASTs.
This is not a true (logical) equality check. Two ASTs may evaluate to
the same logical result, yet have slightly different structures, which
will cause this check to return False.
This is primarily used to check if a user specified AST has changed, and
if we should probably re-evaluate the tree (composition).
"""
# both trees are 'NULL' / empty AST
if isinstance(first, TokenNull) and isinstance(second, TokenNull):
return True
# recursively evaluate the ASTs
return _ast_equal_recursive(first, second)
def _ast_equal_recursive(first, second):
"""
The internal (recursive) AST evaluation routine.
"""
#
# if the left and right types are not identical at every step, the tree
# is obviously different somehow
#
if type(first) != type(second):
return False
#
# if both tokens are terminating / None, they are a match
#
if first is None and second is None:
return True
#
# if the current node is a logic operator, we need to evaluate the
# expressions that make up its input.
#
elif isinstance(first, TokenLogicOperator):
if not _ast_equal_recursive(first.op1, second.op1):
return False
if not _ast_equal_recursive(first.op2, second.op2):
return False
return first.operator == second.operator
#
# if the current node is a coverage token, we simply need to
# compare its symbol.
#
elif isinstance(first, TokenCoverageSingle):
return first.symbol == second.symbol
#
# unknown token? (this should never happen)
#
raise ValueError("Unknown token types, cannot compare them...")
#------------------------------------------------------------------------------
# Parsing
#------------------------------------------------------------------------------
class ParseError(SyntaxError):
"""
Exception raised when composition parsing fails.
A ParseError will provide some contextual information to how and why
the parser failed. Information gleaned through the exception can still
be consumed for user hints, syntax highlighting, or other uses.
"""
def __init__(self, message, expected, error_token, parsed_tokens):
super(ParseError, self).__init__(message)
self.expected = expected
self.error_token = error_token
self.parsed_tokens = parsed_tokens
if error_token == self.parsed_tokens[-1]:
fail = self.parsed_tokens.pop()
self.error_index = fail.span[0]
else:
self.error_index = self.parsed_tokens[-1].span[1]
def __str__(self):
return "%s: at %s, %s" % (self.__class__.__name__, self.error_token.span, self.msg)
#------------------------------------------------------------------------------
# Composition Parser
#------------------------------------------------------------------------------
class CompositionParser(object):
"""
A simple recursive descent parser for Compositions.
Heavily modified from:
https://rockie-yang.gitbooks.io/python-cookbook/content/ch2/simple_parser.html
#----------------------------------------------------------------------
Below is the spec for the LL(1) 'Composition Grammar' that was designed
to generically parse coverage composition equations from raw text.
#----------------------------------------------------------------------
COMPOSITION:
EXPRESSION COMPOSITION_TAIL
COMPOSITION_TAIL:
LOGIC_TOKEN COMPOSITION | None
EXPRESSION:
'(' EXPRESSION ')' COMPOSITION_TAIL | COVERAGE COMPOSITION_TAIL
COVERAGE_TOKEN:
'A' | 'B' | 'C' | ... | 'Z' | AGGREGATE_TOKEN
LOGIC_TOKEN:
'&' | '|' | '^' | '-' | None
"""
def parse(self, text, coverage_tokens):
"""
Parse a string using the Composition Grammar.
Returns an Abstract Syntax Tree (AST) of the parsed input.
Raises ParseError on parse failure.
"""
# if the string is *only* whitespace, return an empty, but valid parse
if not text.strip():
return ([], TokenNull())
#
# we used to parse [A-Za-z] as the COVERAGE_TOKEN, but that means we
# would technically tokenize and construct trees with COVERAGE_TOKEN's
# that have no matching (eg invalid) loaded coverage data.
#
# now we construct the COVERAGE_TOKEN regex just before parsing.
# this enables us to tokenize/parse only the shorthand names that
# reflect the state of loaded coverage
#
COVERAGE_TOKEN = r'(?P<COVERAGE_TOKEN>[%s])' % ''.join(coverage_tokens + [AGGREGATE_TOKEN])
#
# if there were any coverage tokens defined, then we definitely need
# the constructed COVERAGE_TOKEN regex in our grammar list.
#
if coverage_tokens:
TOKEN_REGEXES = [COVERAGE_TOKEN] + TOKEN_DEFINITIONS
else:
TOKEN_REGEXES = TOKEN_DEFINITIONS
# build our master tokenizer regex pattern to parse the text stream
master_pattern = re.compile('|'.join(TOKEN_REGEXES))
# reset the parser's runtime variables
self._parsed_tokens = []
self.current_token = None
self.next_token = None
# tokenize the raw text stream
self.tokens = self._generate_tokens(master_pattern, text)
# initialize the parser state by bumping the parser onto the first token
self._advance()
# parse the token stream using the grammar defined by this class
ast = self._COMPOSITION()
# if there are any tokens remaining in the stream, the text is invalid
if self.next_token:
self._parse_error("Expected $$", TokenNull)
# return the parsed tokens and generated AST
return (self._parsed_tokens, ast)
#--------------------------------------------------------------------------
# Token Stream Operations
#--------------------------------------------------------------------------
def _advance(self):
"""
Advance one token in the token stream.
"""
self.current_token, self.next_token = self.next_token, next(self.tokens, None)
def _accept(self, token_type):
"""
Match and accept the lookahead token.
"""
if self.next_token and self.next_token.type == token_type:
self._advance()
return True
else:
return False
def _generate_tokens(self, regex_pattern, text):
"""
Generate a TextToken stream using a given regex token pattern and text.
"""
scanner = regex_pattern.scanner(text)
for m in iter(scanner.match, None):
token = TextToken(m)
self._parsed_tokens.append(token)
if token.type != 'WS': # ignore whitespace tokens
yield token
def _parse_error(self, message, expected):
"""
Raises a ParseError, capturing elements of the parser state.
"""
raise ParseError(message, expected, self.next_token, self._parsed_tokens)
#--------------------------------------------------------------------------
# Grammar Rules
#--------------------------------------------------------------------------
def _COMPOSITION(self):
"""
COMPOSITION:
EXPRESSION COMPOSITION_TAIL
"""
expression = self._EXPRESSION()
return self._COMPOSITION_TAIL(expression)
def _COMPOSITION_TAIL(self, head):
"""
COMPOSITION_TAIL:
LOGIC_TOKEN COMPOSITION | None
"""
#
# LOGIC_TOKEN COMPOSITION
#
logic_op = self._LOGIC_TOKEN()
if logic_op:
composition = self._COMPOSITION()
return TokenLogicOperator(logic_op, head, composition)
#
# None
#
# no COMPOSITION_TAIL to parse, simply return the leading expression
return head
def _EXPRESSION(self):
"""
EXPRESSION:
'(' EXPRESSION ')' COMPOSITION_TAIL | COVERAGE_TOKEN COMPOSITION_TAIL
"""
#
# ['(' EXPRESSION ')'] COMPOSITION_TAIL
#
if self._accept('LPAREN'):
# parse left paren
left_paren = self.current_token
# parse the expression
expression = self._EXPRESSION() # THESE ARE CHILDREN
# parse the right paren
if not self._accept('RPAREN'):
self._parse_error("Expected RPAREN", TextToken)
right_paren = self.current_token
# inject parenthesis TextTokens into the expression
expression.text_tokens.append(left_paren)
expression.text_tokens.append(right_paren)
#
# [COVERAGE] COMPOSITION_TAIL
#
else:
expression = self._COVERAGE_TOKEN()
# ... [COMPOSITION_TAIL]
return self._COMPOSITION_TAIL(expression)
def _COVERAGE_TOKEN(self):
"""
COVERAGE_TOKEN:
'A' | 'B' | 'C' | ... | 'Z' | AGGREGATE_TOKEN
"""
if self._accept("COVERAGE_TOKEN"):
return TokenCoverageSingle(self.current_token)
self._parse_error("Expected COVERAGE_TOKEN", TokenCoverageSingle)
def _LOGIC_TOKEN(self):
"""
LOGIC_TOKEN:
'&' | '|' | '^' | '-' | None
"""
if self._accept("OR") or \
self._accept("XOR") or \
self._accept("AND") or \
self._accept("MINUS"):
return self.current_token
return None
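The master-pattern tokenization trick the parser is built on can be sketched standalone: each token is a named regex group, the alternation is tried in order, and `match.lastgroup` reports which token type matched. The coverage names ('A', 'B') are baked into the pattern up front, just as parse() does with its dynamically built COVERAGE_TOKEN; the input string is illustrative.

```python
import re

# a cut-down set of token definitions, one named group per token type
TOKEN_REGEXES = [
    r'(?P<COVERAGE_TOKEN>[AB\*])',
    r'(?P<OR>\|)', r'(?P<XOR>\^)', r'(?P<AND>\&)', r'(?P<MINUS>-)',
    r'(?P<LPAREN>\()', r'(?P<RPAREN>\))',
    r'(?P<WS>\s+)', r'(?P<UNKNOWN>.)',
]
master_pattern = re.compile('|'.join(TOKEN_REGEXES))

def tokenize(text):
    """Yield (token_type, token_text) pairs, dropping whitespace tokens."""
    scanner = master_pattern.scanner(text)  # mirrors _generate_tokens above
    return [(m.lastgroup, m.group()) for m in iter(scanner.match, None)
            if m.lastgroup != 'WS']

print(tokenize("(A | B) - A"))
```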
File diff suppressed because it is too large
+111
@@ -0,0 +1,111 @@
import os
import logging
from lighthouse.util.qt import *
from lighthouse.painting import CoveragePainter
from lighthouse.director import CoverageDirector
from lighthouse.coverage import DatabaseCoverage
from lighthouse.metadata import DatabaseMetadata
from lighthouse.util.disassembler import disassembler, DisassemblerContextAPI
logger = logging.getLogger("Lighthouse.Context")
#------------------------------------------------------------------------------
# Lighthouse Session Context
#------------------------------------------------------------------------------
class LighthouseContext(object):
"""
A database/binary-unique instance of Lighthouse and its subsystems.
"""
def __init__(self, core, dctx):
disassembler[self] = DisassemblerContextAPI(dctx)
self.core = core
self.dctx = dctx
self._started = False
# the database metadata cache
self.metadata = DatabaseMetadata(self)
# the coverage engine
self.director = CoverageDirector(self.metadata, self.core.palette)
# the coverage painter
self.painter = CoveragePainter(self, self.director, self.core.palette)
# the coverage overview widget
self.coverage_overview = None
# the directory to start the coverage file dialog in
self._last_directory = None
@property
def palette(self):
return self.core.palette
def start(self):
"""
One-time activation of a Lighthouse context and its subsystems.
"""
if self._started:
return
self.core.palette.warmup()
self.metadata.start()
self.director.start()
self.painter.start()
# TODO/BINJA remove this ASAP, or find a better workaround... I hate having this here
if disassembler.NAME == "BINJA":
disassembler.hide_dockable("Feature Map")
self._started = True
def terminate(self):
"""
Spin down any session subsystems before the session is deleted.
"""
if not self._started:
return
self.painter.terminate()
self.director.terminate()
self.metadata.terminate()
def select_coverage_files(self):
"""
Prompt a file selection dialog, returning file selections.
NOTE: This saves & reuses the last known directory for subsequent uses.
"""
if not self._last_directory:
self._last_directory = disassembler[self].get_database_directory()
# create & configure a Qt File Dialog for immediate use
file_dialog = QtWidgets.QFileDialog(
None,
'Open code coverage file',
self._last_directory,
'All Files (*.*)'
)
file_dialog.setFileMode(QtWidgets.QFileDialog.ExistingFiles)
# prompt the user with the file dialog, and await filename(s)
filenames, _ = file_dialog.getOpenFileNames()
#
# remember the last directory we were in (parsed from a selected file)
# for the next time the user comes to load coverage files
#
if filenames:
self._last_directory = os.path.dirname(filenames[0]) + os.sep
# log the captured (selected) filenames from the dialog
logger.debug("Captured filenames from file dialog:")
for name in filenames:
logger.debug(" - %s" % name)
# return the captured filenames
return filenames
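The last-directory bookkeeping in select_coverage_files can be sketched with the Qt dialog stubbed out, so the logic runs headless. The class name and file paths below are hypothetical, purely for illustration.

```python
import os

class DirectoryMemory(object):
    """Remembers the directory of the last selected file, like the dialog above."""

    def __init__(self, default_directory):
        self._last_directory = default_directory

    def remember(self, filenames):
        """Cache the directory of the first selected file for next time."""
        if filenames:
            # parse the directory from a selected file, keeping a trailing sep
            self._last_directory = os.path.dirname(filenames[0]) + os.sep
        return self._last_directory

memory = DirectoryMemory("/idb" + os.sep)
print(memory.remember([]))                    # no selection: directory unchanged
print(memory.remember(["/traces/run1.log"]))  # now remembers the traces dir
```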
+847
@@ -0,0 +1,847 @@
import os
import time
import logging
import weakref
import itertools
import collections
from lighthouse.util import *
from lighthouse.util.qt import compute_color_on_gradient
from lighthouse.metadata import DatabaseMetadata
logger = logging.getLogger("Lighthouse.Coverage")
#------------------------------------------------------------------------------
# Coverage Mapping
#------------------------------------------------------------------------------
#
# When raw runtime data (eg, coverage or trace data) is passed into the
# director, it is stored internally in DatabaseCoverage objects. A
# DatabaseCoverage object (as defined below) roughly equates to a single
# loaded coverage file.
#
# Besides holding loaded coverage data, the DatabaseCoverage objects are
# also responsible for mapping the coverage data to the open database using
# the lifted metadata described in metadata.py.
#
# The 'mapping' objects detailed in this file exist only as a thin layer on
# top of the lifted database metadata.
#
# As mapping objects retain the raw runtime data internally, we are
# able to rebuild mappings should the database structure (and its metadata)
# get updated or refreshed by the user.
#
#------------------------------------------------------------------------------
# Database Coverage
#------------------------------------------------------------------------------
class DatabaseCoverage(object):
"""
Database level coverage mapping.
"""
def __init__(self, palette, name="", filepath=None, data=None):
# color palette
self.palette = palette
# the name of the DatabaseCoverage object
self.name = name
# the filepath this coverage data was sourced from
self.filepath = filepath
# the timestamp of the coverage file on disk
try:
self.timestamp = os.path.getmtime(filepath)
except (OSError, TypeError):
self.timestamp = time.time()
#
# this is the coverage mapping's reference to the underlying database
# metadata. it will use this for all its mapping operations.
#
# here we simply populate the DatabaseCoverage object with a stub
# DatabaseMetadata object, but at runtime we will inject a fully
# collected DatabaseMetadata object as maintained by the director.
#
self._metadata = DatabaseMetadata()
#
# the address hitmap is a dictionary that effectively holds the lowest
# level representation of the original coverage data loaded from disk.
#
# as the name implies, the hitmap will track the number of times a
# given address appeared in the original coverage data.
#
# Eg:
# hitmap =
# {
# 0x8040100: 1,
# 0x8040102: 1,
# 0x8040105: 3,
# 0x8040108: 3, # 0x8040108 was executed 3 times...
# 0x804010a: 3,
# 0x804010f: 1,
# ...
# }
#
# the hitmap gives us an interesting degree of flexibility with regard
# to what data sources we can load coverage data from, and how we
# choose to consume it (eg, visualize coverage, heatmaps, ...)
#
# using hitmap.keys(), we effectively have a coverage bitmap of all
# the addresses executed in the coverage log
#
self._hitmap = collections.Counter(data)
self._imagebase = BADADDR
#
# the coverage hash is a simple hash of the coverage mask (hitmap keys)
#
# it is primarily used by the director as a means of quickly comparing
# two database coverage objects against each other, and speculating on
# the output of logical/arithmetic operations of their coverage data.
#
# this hash will need to be recomputed via _update_coverage_hash()
# anytime new coverage data is introduced to this object, or when the
# hitmap is otherwise modified internally.
#
# this is necessary because we cache the coverage hash. computing the
# hash on demand is expensive, and it really shouldn't change often.
#
# see the usage of 'coverage_hash' in director.py for more info
#
self.coverage_hash = 0
self._update_coverage_hash()
#
# unmapped data is a list of addresses that we have coverage for, but
# could not map to any defined function in the database.
#
# a shortcoming of lighthouse (as recently as v0.8) is that it does
# *not* compute statistics for, or paint, loaded coverage that falls
# outside of defined functions.
#
# under normal circumstances, one can just define a function at the
# area of interest (assuming it was a disassembler issue) and refresh
# the lighthouse metadata to 'map' the missing coverage.
#
# in cases of obfuscation, abnormal control flow, or self modifying
# code, lighthouse will probably not perform well. but to be fair,
# lighthouse was designed for displaying coverage more-so than hit
# tracing or trace exploration.
#
# initially, all loaded coverage data is marked as unmapped
#
self.unmapped_addresses = set(self._hitmap.keys())
#
# at runtime, the map_coverage() member function of this class is
# responsible for taking the unmapped data and mapping it on top of the
# lifted database metadata (self._metadata).
#
# the process of mapping the raw coverage data will yield NodeCoverage
# and FunctionCoverage objects. these are the buckets that the unmapped
# coverage data is poured into during the mapping process.
#
# NodeCoverage objects represent coverage at the node (basic block)
# level and are owned by a respective FunctionCoverage object.
#
# FunctionCoverage objects represent coverage at the function level,
# children NodeCoverage objects and providing higher level statistics.
#
# self.nodes: address --> NodeCoverage
# self.functions: address --> FunctionCoverage
#
self.nodes = {}
self.functions = {}
self.instruction_percent = 0.0
# blocks that have not been fully executed (eg, crash / exception)
self.partial_nodes = set()
self.partial_instructions = set()
# addresses that have been executed, but are not in a defined node
self.orphan_addresses = set()
#
# we instantiate a single weakref of ourself (the DatabaseCoverage
# object) such that we can distribute it to the children we create
# without having to repeatedly instantiate new ones.
#
self._weak_self = weakref.proxy(self)
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def data(self):
"""
Return the backing coverage data (a hitmap).
"""
return self._hitmap
@property
def coverage(self):
"""
Return the coverage (address) bitmap/mask.
"""
return viewkeys(self._hitmap)
@property
def suspicious(self):
"""
Return a bool indicating if the coverage seems badly mapped.
"""
bad = 0
total = len(self.nodes)
if not total:
return False
#
# count the number of nodes (basic blocks) that allegedly were executed
# (they have coverage data) but don't actually have their first
# instruction logged as executed.
#
# this is considered 'suspicious' and should be a red flag that the
# provided coverage data is malformed, or for a different binary
#
for address, node_coverage in iteritems(self.nodes):
if address in node_coverage.executed_instructions:
continue
bad += 1
# compute a percentage of the 'bad nodes'
percent = (bad/float(total))*100
logger.debug("SUSPICIOUS: %5.2f%% (%u/%u)" % (percent, bad, total))
#
# if the percentage of 'bad' coverage nodes is too high, we consider
# this database coverage as 'suspicious' or 'badly mapped'
#
# this number (2%) may need to be tuned. really any non-zero figure
# is strange, but we will give some wiggle room for DBI or
# disassembler fudginess.
#
return percent > 2.0
#--------------------------------------------------------------------------
# Metadata Population
#--------------------------------------------------------------------------
def update_metadata(self, metadata, delta=None):
"""
Install a new database metadata object.
"""
self._metadata = weakref.proxy(metadata)
#
# if the underlying database / metadata gets rebased, we will need to
# rebase our coverage data. the 'raw' coverage data stored in the
# hitmap is stored as absolute addresses for performance reasons
#
# here we compute the offset that we will need to rebase the coverage
# data by, should a rebase have occurred
#
rebase_offset = self._metadata.imagebase - self._imagebase
#
# if the coverage's imagebase is still BADADDR, that means that this
# coverage object hasn't yet been mapped onto a given metadata cache.
#
# that's fine, we just need to initialize our imagebase which should
# (hopefully!) match the imagebase originally used when baking the
# coverage data into an absolute address form.
#
if self._imagebase == BADADDR:
self._imagebase = self._metadata.imagebase
self._normalize_coverage()
#
# if the imagebase for this coverage exists, then it is susceptible to
# being rebased by a metadata update. if rebase_offset is non-zero,
# this is an indicator that a rebase has occurred.
#
# when a rebase occurs in the metadata, we must also rebase our
# coverage data (stored in the hitmap)
#
elif rebase_offset:
self._hitmap = { (address + rebase_offset): hits for address, hits in iteritems(self._hitmap) }
self._imagebase = self._metadata.imagebase
#
# since the metadata has been updated in one form or another, we need
# to trash our existing coverage mapping, and rebuild it from the data.
#
self.unmap_all()
def refresh(self):
"""
Refresh the mapping of our coverage data to the database metadata.
"""
# rebuild our coverage mapping
dirty_nodes, dirty_functions = self._map_coverage()
# bake our coverage map
self._finalize(dirty_nodes, dirty_functions)
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
Does not require @disassembler.execute_ui decorator as no Qt is touched.
"""
for function in self.functions.values():
function.coverage_color = compute_color_on_gradient(
function.instruction_percent,
self.palette.table_coverage_bad,
self.palette.table_coverage_good
)
def _finalize(self, dirty_nodes, dirty_functions):
"""
Finalize the DatabaseCoverage statistics / data for use.
"""
self._finalize_nodes(dirty_nodes)
self._finalize_functions(dirty_functions)
self._finalize_instruction_percent()
def _finalize_nodes(self, dirty_nodes):
"""
Finalize the NodeCoverage objects statistics / data for use.
"""
metadata = self._metadata
for address, node_coverage in iteritems(dirty_nodes):
node_coverage.finalize()
# save off a reference to partially executed nodes
if node_coverage.instructions_executed != metadata.nodes[address].instruction_count:
self.partial_nodes.add(address)
else:
self.partial_nodes.discard(address)
# finalize the set of instructions executed in partially executed nodes
instructions = []
for node_address in self.partial_nodes:
instructions.append(self.nodes[node_address].executed_instructions)
self.partial_instructions = set(itertools.chain.from_iterable(instructions))
def _finalize_functions(self, dirty_functions):
"""
Finalize the FunctionCoverage objects statistics / data for use.
"""
for function_coverage in itervalues(dirty_functions):
function_coverage.finalize()
def _finalize_instruction_percent(self):
"""
Finalize the DatabaseCoverage's coverage % by instructions executed.
"""
# sum all the instructions in the database metadata
total = sum(f.instruction_count for f in itervalues(self._metadata.functions))
if not total:
self.instruction_percent = 0.0
return
# sum the unique instructions executed across all functions
executed = sum(f.instructions_executed for f in itervalues(self.functions))
# save the computed percentage of database instructions executed (0 to 1.0)
self.instruction_percent = float(executed) / total
#--------------------------------------------------------------------------
# Data Operations
#--------------------------------------------------------------------------
def add_data(self, data, update=True):
"""
Add an existing instruction hitmap to the coverage mapping.
"""
# add the given runtime data to our data source
for address, hit_count in iteritems(data):
self._hitmap[address] += hit_count
# do not update other internal structures if requested
if not update:
return
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# mark these touched addresses as dirty
self.unmapped_addresses |= viewkeys(data)
def add_addresses(self, addresses, update=True):
"""
Add a list of instruction addresses to the coverage mapping.
"""
# increment the hit count for an address
for address in addresses:
self._hitmap[address] += 1
# do not update other internal structures if requested
if not update:
return
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# mark these touched addresses as dirty
self.unmapped_addresses |= set(addresses)
def subtract_data(self, data):
"""
Subtract an existing instruction hitmap from the coverage mapping.
"""
# subtract the given hitmap from our existing hitmap
for address, hit_count in iteritems(data):
self._hitmap[address] -= hit_count
#
# if there are no longer any hits for this address, delete its
# entry from the hitmap dictionary. we don't want its entry to
# hang around because we use self._hitmap.viewkeys() as a
# coverage bitmap/mask
#
if not self._hitmap[address]:
del self._hitmap[address]
# update the coverage hash as the hitmap has probably changed
self._update_coverage_hash()
#
# unmap everything because a complete re-mapping is easier with the
# current implementation of things
#
self.unmap_all()
def mask_data(self, coverage_mask):
"""
Mask the hitmap data against a given coverage mask.
Returns a new DatabaseCoverage containing the masked hitmap.
"""
composite_data = collections.defaultdict(int)
# preserve only hitmap data that matches the coverage mask
for address in coverage_mask:
composite_data[address] = self._hitmap[address]
# done, return a new DatabaseCoverage masked with the given coverage
return DatabaseCoverage(self.palette, data=composite_data)
def _update_coverage_hash(self):
"""
Update the hash of the coverage mask.
"""
if self._hitmap:
self.coverage_hash = hash(frozenset(viewkeys(self._hitmap)))
else:
self.coverage_hash = 0
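As a quick illustration of why hashing only the hitmap keys (the coverage mask) is enough for the director's cheap comparisons, here is a standalone sketch using hypothetical addresses:

```python
# two hypothetical hitmaps with the same executed addresses but
# different hit counts...
hitmap_a = {0x1000: 1, 0x1004: 7}
hitmap_b = {0x1000: 3, 0x1004: 2}

# hashing the frozen key-set (the coverage mask) lets callers compare
# coverage objects cheaply: equal hashes imply identical masks
hash_a = hash(frozenset(hitmap_a.keys()))
hash_b = hash(frozenset(hitmap_b.keys()))

print(hash_a == hash_b)  # True: same mask despite different hit counts
```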
#--------------------------------------------------------------------------
# Coverage Mapping
#--------------------------------------------------------------------------
def _normalize_coverage(self):
"""
Normalize basic block coverage into instruction coverage.
TODO: It would be interesting if we could do away with this entirely,
working off the original instruction/bb coverage data (hitmap) instead.
"""
coverage_addresses = viewkeys(self._hitmap)
if not coverage_addresses:
return
# bucketize the exploded coverage addresses
instructions = coverage_addresses & self._metadata.instructions
basic_blocks = instructions & viewkeys(self._metadata.nodes)
# nothing to normalize if no coverage falls on known instructions
# (this also avoids a divide-by-zero in the ratio computed below)
if not instructions:
return
#
# here we attempt to compute the ratio between basic block addresses,
# and instruction addresses in the incoming coverage data.
#
# this will help us determine if the existing instruction data is
# sufficient, or whether we need to explode/flatten the basic block
# addresses into their respective child instructions
#
block_ratio = len(basic_blocks) / float(len(instructions))
block_trace_confidence = 0.80
logger.debug("Block ratio %f" % block_ratio)
#
# a low basic block to instruction ratio implies the data is probably
# from an instruction trace, or a drcov trace that was exploded from
# (bb_address, size) into its respective addresses
#
if block_ratio < block_trace_confidence:
return
#
# take each basic block address, and explode it into a list of all the
# instruction addresses contained within the basic block as determined
# by the database metadata cache
#
# it is *possible* that this may introduce 'inaccurate' paint should
# the user provide a basic block trace that crashes mid-block. but
# that is not something we can account for in a block trace...
#
for bb_address in basic_blocks:
bb_hits = self._hitmap[bb_address]
for inst_address in self._metadata.nodes[bb_address].instructions:
self._hitmap[inst_address] = bb_hits
logger.debug("Converted basic block trace to instruction trace...")
def _map_coverage(self):
"""
Map loaded coverage data to the underlying database metadata.
"""
dirty_nodes = self._map_nodes()
dirty_functions = self._map_functions(dirty_nodes)
return (dirty_nodes, dirty_functions)
def _map_nodes(self):
"""
Map loaded coverage data to database defined nodes (basic blocks).
"""
db_metadata = self._metadata
dirty_nodes = {}
# the coverage data we will attempt to process in this function
coverage_addresses = sorted(self.unmapped_addresses)
#
# the loop below is the core of our coverage mapping process.
#
# operating on whatever coverage data (instruction addresses) reside
# within unmapped_addresses, this loop will attempt to bucket the coverage
# into NodeCoverage objects where possible.
#
# the higher level coverage mappings (eg FunctionCoverage,
# DatabaseCoverage) get built on top of the node mapping that we
# perform here.
#
# since this loop is the most computationally expensive part of the
# mapping process, it has been carefully profiled & optimized for
# speed. please be careful if you wish to modify it...
#
i, num_addresses = 0, len(coverage_addresses)
while i < num_addresses:
# get the next coverage address to map
address = coverage_addresses[i]
# get the node (basic block) metadata that this address falls in
node_metadata = db_metadata.get_node(address)
#
# should we fail to locate node metadata for the coverage address
# that we are trying to map, then the address must not fall inside
# of a defined function
#
if not node_metadata:
self.orphan_addresses.add(address)
if address in db_metadata.instructions:
self.unmapped_addresses.discard(address)
i += 1
continue
#
# we found applicable node metadata for this address, now we will
# try to find an existing bucket (NodeCoverage) for the address
#
if node_metadata.address in self.nodes:
node_coverage = self.nodes[node_metadata.address]
#
# failed to locate an existing NodeCoverage object for this
# address, it looks like this is the first time we have attempted
# to bucket coverage for this node.
#
# create a new NodeCoverage bucket and use it now
#
else:
node_coverage = NodeCoverage(node_metadata.address, self._weak_self)
self.nodes[node_metadata.address] = node_coverage
# alias for speed, prior to looping
node_start = node_metadata.address
node_end = node_start + node_metadata.size
#
# the loop below is an inlined fast-path that assumes the next
# several coverage addresses will likely belong to the same node
# that we just looked up (or created) in the code above
#
# we can simply re-use the current node and its coverage object
# until the next address to be processed falls outside the node
#
while 1:
#
# map the hitmap data for the current address if it falls on
# an actual instruction start within the node
#
# if the address falls within an instruction, it will just be
# 'ignored', remaining in the 'unmapped' / invisible data
#
if address in node_metadata.instructions:
node_coverage.executed_instructions[address] = self._hitmap[address]
self.unmapped_addresses.discard(address)
# get the next address to attempt mapping on
try:
i += 1
address = coverage_addresses[i]
# an IndexError implies there is nothing left to map...
except IndexError:
break
#
# if the next address is not in this node, it's time to break out
# of this loop and send it through the full node lookup path
#
if not (node_start <= address < node_end):
break
# the node was updated, so save its coverage as dirty
dirty_nodes[node_metadata.address] = node_coverage
# done, return a map of NodeCoverage objects that were modified
return dirty_nodes
def _map_functions(self, dirty_nodes):
"""
Map loaded coverage data to database defined functions.
"""
dirty_functions = {}
#
# thanks to _map_nodes(), we now have a repository of NodeCoverage
# objects that are considered 'dirty' and can be used to precisely
# build or update the function level coverage data
#
for node_coverage in itervalues(dirty_nodes):
#
# using a given NodeCoverage object, we retrieve its underlying
# metadata so that we can perform a reverse lookup of its function
# (parent) metadata.
#
functions = self._metadata.get_functions_by_node(node_coverage.address)
#
# now we will attempt to retrieve the FunctionCoverage objects
# that we need to parent the given NodeCoverage object to
#
for function_metadata in functions:
function_coverage = self.functions.get(function_metadata.address, None)
#
# if we failed to locate the FunctionCoverage for a function
# that references this node, then it is the first time we have
# seen coverage for it.
#
# create a new coverage function object and use it now.
#
if not function_coverage:
function_coverage = FunctionCoverage(function_metadata.address, self._weak_self)
self.functions[function_metadata.address] = function_coverage
# add the NodeCoverage object to its parent FunctionCoverage
function_coverage.mark_node(node_coverage)
dirty_functions[function_metadata.address] = function_coverage
# done, return a map of FunctionCoverage objects that were modified
return dirty_functions
def unmap_all(self):
"""
Unmap all mapped coverage data.
"""
# clear out the processed / computed coverage data structures
self.nodes = {}
self.functions = {}
self.partial_nodes = set()
self.partial_instructions = set()
self.orphan_addresses = set()
# dump the source coverage data back into an 'unmapped' state
self.unmapped_addresses = set(self._hitmap.keys())
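The rebase path in update_metadata() above boils down to a single dictionary comprehension over the hitmap. A standalone sketch with hypothetical imagebases and addresses:

```python
# a hypothetical hitmap keyed by absolute addresses, originally baked
# against an imagebase of 0x400000
hitmap = {0x401000: 2, 0x401004: 1}

old_imagebase = 0x400000
new_imagebase = 0x550000

# the same dict-comprehension rebase that update_metadata() performs
rebase_offset = new_imagebase - old_imagebase
hitmap = {address + rebase_offset: hits for address, hits in hitmap.items()}

print(hex(min(hitmap)))  # 0x551000
```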
#------------------------------------------------------------------------------
# Function Coverage
#------------------------------------------------------------------------------
class FunctionCoverage(object):
"""
Function level coverage mapping.
"""
def __init__(self, function_address, database=None):
self.database = database
self.address = function_address
# addresses of nodes executed
self.nodes = {}
# compute the # of instructions executed by this function's coverage
self.instruction_percent = 0.0
self.node_percent = 0.0
# baked colors
self.coverage_color = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def hits(self):
"""
Return the number of instruction executions in this function.
"""
return sum(x.hits for x in itervalues(self.nodes))
@property
def nodes_executed(self):
"""
Return the number of unique nodes executed in this function.
"""
return len(self.nodes)
@property
def instructions_executed(self):
"""
Return the number of unique instructions executed in this function.
"""
return sum(x.instructions_executed for x in itervalues(self.nodes))
@property
def instructions(self):
"""
Return the executed instruction addresses in this function.
"""
return set([ea for node in itervalues(self.nodes) for ea in node.executed_instructions.keys()])
#--------------------------------------------------------------------------
# Controls
#--------------------------------------------------------------------------
def mark_node(self, node_coverage):
"""
Save the given NodeCoverage to this function.
"""
self.nodes[node_coverage.address] = node_coverage
def finalize(self):
"""
Finalize the FunctionCoverage data for use.
"""
function_metadata = self.database._metadata.functions[self.address]
# compute the % of nodes executed
self.node_percent = float(self.nodes_executed) / function_metadata.node_count
# compute the % of instructions executed
self.instruction_percent = \
float(self.instructions_executed) / function_metadata.instruction_count
# the sum of node executions in this function
node_sum = sum(x.executions for x in itervalues(self.nodes))
# the estimated number of executions this function has experienced
self.executions = float(node_sum) / function_metadata.node_count
# bake colors
self.coverage_color = compute_color_on_gradient(
self.instruction_percent,
self.database.palette.table_coverage_bad,
self.database.palette.table_coverage_good
)
#------------------------------------------------------------------------------
# Node Coverage
#------------------------------------------------------------------------------
class NodeCoverage(object):
"""
Node (basic block) level coverage mapping.
"""
def __init__(self, node_address, database=None):
self.database = database
self.address = node_address
self.executed_instructions = {}
self.instructions_executed = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def hits(self):
"""
Return the number of instruction executions in this node.
"""
return sum(itervalues(self.executed_instructions))
#--------------------------------------------------------------------------
# Controls
#--------------------------------------------------------------------------
def finalize(self):
"""
Finalize the coverage metrics for faster access.
"""
node_metadata = self.database._metadata.nodes[self.address]
# the estimated number of executions this node has experienced.
self.executions = float(self.hits) / node_metadata.instruction_count
# the number of unique instructions executed
self.instructions_executed = len(self.executed_instructions)
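The 'suspicious' heuristic implemented by DatabaseCoverage.suspicious can be sketched independently. This hypothetical example (made-up node addresses) counts nodes that claim coverage but never executed their first instruction:

```python
# hypothetical mapping of node start address -> executed instruction
# addresses within that node
nodes = {
    0x1000: {0x1000, 0x1002},  # first instruction executed: fine
    0x2000: {0x2004},          # node start never executed: suspicious
    0x3000: {0x3000},
    0x4000: {0x4000},
}

# count nodes whose first (start) instruction was never logged
bad = sum(1 for address, executed in nodes.items() if address not in executed)
percent = (bad / float(len(nodes))) * 100

# mirror the 2% threshold used by DatabaseCoverage.suspicious
suspicious = percent > 2.0
print(suspicious)  # True: 25% of nodes are 'bad'
```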
from lighthouse.util.log import lmsg
from lighthouse.util.misc import iteritems
from lighthouse.util.disassembler import disassembler
#------------------------------------------------------------------------------
# Exception Definitions
#------------------------------------------------------------------------------
class LighthouseError(Exception):
"""
An error generated by Lighthouse.
"""
def __init__(self, *args, **kwargs):
super(LighthouseError, self).__init__(*args, **kwargs)
#------------------------------------------------------------------------------
# Coverage File Exceptions
#------------------------------------------------------------------------------
class CoverageException(LighthouseError):
"""
A class of errors pertaining to loading & mapping coverage files.
"""
name = NotImplementedError
description = NotImplementedError
def __init__(self, message, filepath):
super(CoverageException, self).__init__(message)
self.message = message # Python 3: Exception no longer sets .message
self.filepath = filepath
@property
def verbose(self):
return "Error: %s\n\n%s" % (self.name, self.description)
def __str__(self):
return self.message + " '%s'" % self.filepath
class CoverageParsingError(CoverageException):
"""
An error generated by the CoverageReader when all parsers fail.
"""
name = "PARSE_FAILURE"
description = \
"Failed to parse one or more of the selected coverage files!\n\n" \
" Possible reasons:\n" \
" - You selected a file that was *not* a coverage file.\n" \
" - The selected coverage file is malformed or unreadable.\n" \
" - A suitable parser for the coverage file is not installed.\n\n" \
"Please see the disassembler console for more info..."
def __init__(self, filepath, tracebacks):
super(CoverageParsingError, self).__init__("Failed to parse coverage file", filepath)
self.tracebacks = tracebacks
class CoverageMissingError(CoverageException):
"""
An error generated when no data was extracted from a CoverageFile.
"""
name = "NO_COVERAGE_ERROR"
description = \
"No usable coverage data was extracted from one of the selected files.\n\n" \
" Possible reasons:\n" \
" - You selected a coverage file for the wrong binary.\n" \
" - The name of the executable file used to generate this database\n" \
" is different than the one you collected coverage against.\n" \
" - Your DBI failed to collect any coverage for this binary.\n\n" \
"Please see the disassembler console for more info..."
def __init__(self, filepath):
super(CoverageMissingError, self).__init__("No coverage extracted from file", filepath)
class CoverageMappingAbsent(CoverageException):
"""
A warning generated when coverage data cannot be mapped.
"""
name = "NO_COVERAGE_MAPPED"
description = \
"One or more of the loaded coverage files has no visibly mapped data.\n\n" \
" Possible reasons:\n" \
" - The loaded coverage data does not fall within defined functions.\n" \
" - You loaded an absolute address trace with a different imagebase.\n" \
" - The coverage data might be corrupt or malformed.\n\n" \
"Please see the disassembler console for more info..."
def __init__(self, coverage):
super(CoverageMappingAbsent, self).__init__("No coverage data could be mapped", coverage.filepath)
self.coverage = coverage
class CoverageMappingSuspicious(CoverageException):
"""
A warning generated when coverage data does not appear to match the database.
"""
name = "BAD_COVERAGE_MAPPING"
description = \
"One or more of the loaded coverage files appears to be badly mapped.\n\n" \
" Possible reasons:\n" \
" - You selected the wrong binary/module to load coverage from.\n" \
" - Your coverage file/data is for a different version of the\n" \
" binary that does not match what the disassembler has open.\n" \
" - You recorded self-modifying code or something with very\n" \
" abnormal control flow (obfuscated code, malware, packers).\n" \
" - The coverage data might be corrupt or malformed.\n\n" \
"This means that any coverage displayed by Lighthouse is PROBABLY\n" \
"WRONG and should not be trusted because the coverage data does not\n" \
"appear to match the disassembled binary."
def __init__(self, coverage):
super(CoverageMappingSuspicious, self).__init__("Coverage data appears badly mapped", coverage.filepath)
self.coverage = coverage
#------------------------------------------------------------------------------
# UI Warnings
#------------------------------------------------------------------------------
def warn_errors(errors, ignore=None):
"""
Warn the user of any encountered errors with a messagebox.
"""
# avoid a mutable default argument; treat None as 'ignore nothing'
if ignore is None:
ignore = []
if not errors:
return
for error_type, error_list in iteritems(errors):
#
# loop through the individual instances/files that caused this error
# and dump the results to the disassembler console...
#
lmsg("-"*50)
lmsg("Files reporting %s:" % error_type.name)
for error in error_list:
lmsg(" - %s" % error.filepath)
# suppress popups for certain errors, if the user has specified such
if error_type in ignore:
continue
#
# popup a more verbose error messagebox for the user to read regarding
# this class of error they encountered
#
disassembler.warning(error.verbose)
# done ...
lmsg("-"*50)
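Judging from the loop above, warn_errors() expects its errors argument to be a mapping of error class to a list of error instances. A minimal sketch of building that shape, using a hypothetical stand-in class rather than the real CoverageException subclasses:

```python
import collections

# a hypothetical stand-in for a CoverageException subclass
class ParseFailure(Exception):
    name = "PARSE_FAILURE"

errors_by_type = collections.defaultdict(list)

# simulate two files failing with the same class of error
for filepath in ["a.log", "b.log"]:
    errors_by_type[ParseFailure].append(ParseFailure(filepath))

# warn_errors() iterates this {error_type: [errors]} mapping, logging
# each offending file and popping one messagebox per error class
print(len(errors_by_type[ParseFailure]))  # 2
```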
import ctypes
import logging
from binaryninja import PluginCommand
from binaryninjaui import UIAction, UIActionHandler, Menu
from lighthouse.context import LighthouseContext
from lighthouse.integration.core import LighthouseCore
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.Binja.Integration")
#------------------------------------------------------------------------------
# Lighthouse Binja Integration
#------------------------------------------------------------------------------
class LighthouseBinja(LighthouseCore):
"""
Lighthouse UI Integration for Binary Ninja.
"""
def __init__(self):
super(LighthouseBinja, self).__init__()
def get_context(self, dctx, startup=True):
"""
Get the LighthouseContext object for a given database context.
In Binary Ninja, a dctx is a BinaryView (BV).
"""
dctx_id = ctypes.addressof(dctx.handle.contents)
#
# create a new LighthouseContext if this is the first time a context
# has been requested for this BNDB / bv
#
if dctx_id not in self.lighthouse_contexts:
# create a new 'context' representing this BNDB / bv
lctx = LighthouseContext(self, dctx)
if startup:
lctx.start()
# save the created ctx for future calls
self.lighthouse_contexts[dctx_id] = lctx
#
# for binja, we basically *never* want to start the lighthouse ctx
# when it is first created. this is because binja will *immediately*
# create a coverage overview widget for every database when it is
# first opened.
#
# this is annoying, because we don't want to actually start up all
# of the lighthouse threads and subsystems unless the user actually
# starts trying to use lighthouse for their session.
#
# so we initialize the lighthouse context (with start()) on the
# second context request which will go through the else block
# below... any subsequent call to start() is effectively a nop!
#
else:
lctx = self.lighthouse_contexts[dctx_id]
lctx.start()
# return the lighthouse context object for this database ctx / bv
return lctx
def binja_close_context(self, dctx):
"""
Attempt to close / spin-down the LighthouseContext for the given dctx.
In Binary Ninja, a dctx is a BinaryView (BV).
"""
dctx_id = ctypes.addressof(dctx.handle.contents)
# fetch the LighthouseContext for the closing BNDB
try:
lctx = self.lighthouse_contexts.pop(dctx_id)
#
# if lighthouse was not actually used for this BNDB / session, then
# the lookup will fail as there is nothing to spindown
#
except KeyError:
return
# spin down the closing context (stop threads, cleanup qt state, etc)
logger.info("Closing a LighthouseContext...")
lctx.terminate()
#--------------------------------------------------------------------------
# UI Integration (Internal)
#--------------------------------------------------------------------------
#
# TODO / HACK / XXX / V35 / 2021: Some of Binja's UI elements (such as the
# terminal) do not get assigned a BV, even if there is only one open.
#
# this is problematic, because if the user 'clicks' onto the terminal, and
# then tries to execute our UIActions (like 'Load Coverage File'), the
# given 'context.binaryView' will be None
#
# in the meantime, we have to use this workaround that will try to grab
# the 'current' bv from the dock. this is not ideal, but it will suffice.
#
# -----------------
#
# XXX: It's now 2024, Binja's UI / API stack has grown a lot. it's more
# powerful and a bunch of the oddities / hacks lighthouse employed for
# binja may no longer apply. this whole file should probably be revisited
# and refactored at some point... sorry if it's hard to follow
#
def _interactive_load_file(self, context):
dctx = disassembler.binja_get_bv_from_dock()
if not dctx:
disassembler.warning("Lighthouse requires an open BNDB to load coverage.")
return
super(LighthouseBinja, self).interactive_load_file(dctx)
def _interactive_load_batch(self, context):
dctx = disassembler.binja_get_bv_from_dock()
if not dctx:
disassembler.warning("Lighthouse requires an open BNDB to load coverage.")
return
super(LighthouseBinja, self).interactive_load_batch(dctx)
def _open_coverage_xref(self, context):
super(LighthouseBinja, self).open_coverage_xref(context.address, context.binaryView)
def _interactive_coverage_xref(self, context):
if context is None:
return
#
# this is a special case where we check if the ctx exists rather than
# blindly creating a new one. again, this is because binja may call
# this function at random times to decide whether it should display the
# XREF menu option.
#
# but asking whether or not the xref menu option should be shown is not
# a good indication of whether the user is actually using lighthouse,
# so this path should not be one that creates lighthouse contexts
#
dctx = context.binaryView
if not dctx:
return
dctx_id = ctypes.addressof(dctx.handle.contents)
lctx = self.lighthouse_contexts.get(dctx_id, None)
if not lctx:
return
#
# is there even any coverage loaded into lighthouse? if not, the user
# probably isn't even using it. so don't bother showing the xref action
#
if not lctx.director.coverage_names:
return
if context.view is None:
return
view = context.view
context_menu = view.contextMenu()
#
# Create a new, temporary Coverage Xref action to inject into the
# right click context menu that is being shown...
#
action = "Coverage Xref"
UIAction.registerAction(action)
action_handler = view.actionHandler()
action_handler.bindAction(action, UIAction(self._open_coverage_xref))
context_menu.addAction(action, "Plugins")
def _is_xref_valid(self, dctx, addr):
#
# this is a special case where we check if the ctx exists rather than
# blindly creating a new one. again, this is because binja may call
# this function at random times to decide whether it should display the
# XREF menu option.
#
# but asking whether or not the xref menu option should be shown is not
# a good indication of whether the user is actually using lighthouse,
# so this path should not be one that creates lighthouse contexts
#
dctx_id = ctypes.addressof(dctx.handle.contents)
lctx = self.lighthouse_contexts.get(dctx_id, None)
if not lctx:
return False
# return True if there appears to be coverage loaded...
return bool(lctx.director.coverage_names)
def _open_coverage_overview(self, context):
dctx = disassembler.binja_get_bv_from_dock()
if not dctx:
disassembler.warning("Lighthouse requires an open BNDB to open the overview.")
return
super(LighthouseBinja, self).open_coverage_overview(dctx)
def _stub(self, context):
# XXX: This was added as a last minute bodge prior to releasing v0.9.3,
# it fixes a crash-on-close that was manifesting on binja macOS, when
# using a lambda instead of a concrete function/stub like this.
return None
#--------------------------------------------------------------------------
# Binja Actions
#--------------------------------------------------------------------------
ACTION_LOAD_FILE = "Lighthouse\\Load code coverage file..."
ACTION_LOAD_BATCH = "Lighthouse\\Load code coverage batch..."
ACTION_COVERAGE_XREF = "Lighthouse\\Coverage Xref"
ACTION_COVERAGE_OVERVIEW = "Lighthouse\\Open Coverage Overview"
def _install_load_file(self):
action = self.ACTION_LOAD_FILE
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._interactive_load_file))
Menu.mainMenu("Plugins").addAction(action, "Loading", 0)
logger.info("Installed the 'Code coverage file' menu entry")
def _install_load_batch(self):
action = self.ACTION_LOAD_BATCH
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._interactive_load_batch))
Menu.mainMenu("Plugins").addAction(action, "Loading", 1)
logger.info("Installed the 'Code coverage batch' menu entry")
def _install_open_coverage_xref(self):
action = self.ACTION_COVERAGE_XREF
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._stub, self._interactive_coverage_xref))
Menu.mainMenu("Plugins").addAction(action, "Loading", 2)
# NOTE/V35: Binja automatically creates View --> Show Coverage Overview
def _install_open_coverage_overview(self):
action = self.ACTION_COVERAGE_OVERVIEW
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._open_coverage_overview))
Menu.mainMenu("Plugins").addAction(action, "Windows", 0)
logger.info("Installed the 'Open Coverage Overview' menu entry")
# NOTE/V35: Binja doesn't really 'unload' plugins, so whatever...
def _uninstall_load_file(self):
pass
def _uninstall_load_batch(self):
pass
def _uninstall_open_coverage_xref(self):
pass
def _uninstall_open_coverage_overview(self):
pass
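Throughout the Binja integration above, LighthouseContext objects are keyed by `ctypes.addressof(dctx.handle.contents)` rather than by the Python wrapper object itself, because Binary Ninja hands back a fresh wrapper for the same BinaryView on every callback. A minimal sketch of why that keying works, using a plain ctypes stand-in (`FakeCoreObject` is hypothetical; no Binary Ninja APIs are used):

```python
import ctypes

# stand-in for the C-level object a BinaryView wraps
class FakeCoreObject(ctypes.Structure):
    _fields_ = [("refcount", ctypes.c_int)]

core = FakeCoreObject(1)

# two independent Python 'wrappers' (pointers) around the same core object...
wrapper_a = ctypes.pointer(core)
wrapper_b = ctypes.pointer(core)

# ...resolve to the same identity key, because addressof() reports the
# address of the underlying C memory, not the Python wrapper
key_a = ctypes.addressof(wrapper_a.contents)
key_b = ctypes.addressof(wrapper_b.contents)
assert key_a == key_b

# which makes the address a stable dict key across callbacks
contexts = {key_a: "LighthouseContext"}
assert contexts.pop(key_b) == "LighthouseContext"
```

This is why `_binja_closing()`-style teardown can `pop()` the context by recomputing the key from whatever wrapper it is handed.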
@@ -0,0 +1,32 @@
import logging
from lighthouse.util.log import lmsg
from lighthouse.integration.binja_integration import LighthouseBinja
logger = logging.getLogger("Lighthouse.Binja.Loader")
#------------------------------------------------------------------------------
# Lighthouse Binja Loader
#------------------------------------------------------------------------------
#
# The Binary Ninja plugin loading process is less involved compared to IDA.
#
# When Binary Ninja is starting up, it will import all python files placed
# in its root plugin folder. It will then attempt to import any *directory*
# in the plugin folder as a python module.
#
# For this reason, you may see Binary Ninja attempting to load 'lighthouse'
# and 'lighthouse_plugin' in your console. This is normal due to the way
# we have structured Lighthouse and its loading process.
#
# In practice, lighthouse_plugin.py will import the contents of this file,
# when Binary Ninja is starting up. As such, this is our only opportunity
# to load & integrate Lighthouse.
#
try:
lighthouse = LighthouseBinja()
lighthouse.load()
except Exception as e:
lmsg("Failed to initialize Lighthouse")
logger.exception("Exception details:")
@@ -0,0 +1,412 @@
import abc
import logging
import lighthouse
from lighthouse.util import lmsg
from lighthouse.util.qt import *
from lighthouse.util.update import check_for_update
from lighthouse.util.disassembler import disassembler
from lighthouse.ui import *
from lighthouse.metadata import metadata_progress
from lighthouse.exceptions import *
logger = logging.getLogger("Lighthouse.Core")
#------------------------------------------------------------------------------
# Lighthouse Plugin Core
#------------------------------------------------------------------------------
class LighthouseCore(object):
__metaclass__ = abc.ABCMeta
#--------------------------------------------------------------------------
# Plugin Metadata
#--------------------------------------------------------------------------
PLUGIN_VERSION = "0.9.4-DEV"
AUTHORS = "Markus Gaasedelen"
DATE = "2024"
#--------------------------------------------------------------------------
# Initialization
#--------------------------------------------------------------------------
def load(self):
"""
Load the plugin, and integrate its UI into the disassembler.
"""
self._update_checked = False
self.lighthouse_contexts = {}
# the plugin color palette
self.palette = LighthousePalette()
self.palette.theme_changed(self.refresh_theme)
def create_coverage_overview(name, parent, dctx):
lctx = self.get_context(dctx, startup=False)
widget = disassembler.create_dockable_widget(parent, name)
overview = CoverageOverview(lctx, widget)
return widget
# the coverage overview widget
disassembler.register_dockable("Coverage Overview", create_coverage_overview)
# install disassembler UI
self._install_ui()
# install entry point for headless / terminal access...
lighthouse.get_context = self.get_context
# plugin loaded successfully, print the plugin banner
self.print_banner()
logger.info("Successfully loaded plugin")
def unload(self):
"""
Unload the plugin, and remove any UI integrations.
"""
self._uninstall_ui()
# remove headless entry point
lighthouse.get_context = lambda x: None
# spin down any active contexts (stop threads, cleanup qt state, etc)
for lctx in self.lighthouse_contexts.values():
lctx.terminate()
logger.info("-"*75)
logger.info("Plugin terminated")
def print_banner(self):
"""
Print the plugin banner.
"""
# build the main banner title
banner_params = (self.PLUGIN_VERSION, self.AUTHORS, self.DATE)
banner_title = "v%s - (c) %s - %s" % banner_params
# print plugin banner
lmsg("Loaded %s" % banner_title)
#--------------------------------------------------------------------------
# Disassembler / Database Context Selector
#--------------------------------------------------------------------------
@abc.abstractmethod
def get_context(self, dctx, startup=True):
"""
Get the LighthouseContext object for a given database context.
"""
pass
#--------------------------------------------------------------------------
# UI Integration (Internal)
#--------------------------------------------------------------------------
def _install_ui(self):
"""
Initialize & integrate all plugin UI elements.
"""
self._install_load_file()
self._install_load_batch()
self._install_open_coverage_xref()
self._install_open_coverage_overview()
def _uninstall_ui(self):
"""
Cleanup & remove all plugin UI integrations.
"""
self._uninstall_open_coverage_overview()
self._uninstall_open_coverage_xref()
self._uninstall_load_batch()
self._uninstall_load_file()
@abc.abstractmethod
def _install_load_file(self):
"""
Install the 'File->Load->Code coverage file...' menu entry.
"""
pass
@abc.abstractmethod
def _install_load_batch(self):
"""
Install the 'File->Load->Code coverage batch...' menu entry.
"""
pass
@abc.abstractmethod
def _install_open_coverage_xref(self):
"""
Install the right click 'Coverage Xref' context menu entry.
"""
pass
@abc.abstractmethod
def _install_open_coverage_overview(self):
"""
Install the 'View->Open subviews->Coverage Overview' menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_load_file(self):
"""
Remove the 'File->Load file->Code coverage file...' menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_load_batch(self):
"""
Remove the 'File->Load file->Code coverage batch...' menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_open_coverage_xref(self):
"""
Remove the right click 'Coverage Xref' context menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_open_coverage_overview(self):
"""
Remove the 'View->Open subviews->Coverage Overview' menu entry.
"""
pass
#--------------------------------------------------------------------------
# UI Actions (Public)
#--------------------------------------------------------------------------
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
for lctx in self.lighthouse_contexts.values():
lctx.director.refresh_theme()
if lctx.coverage_overview:
lctx.coverage_overview.refresh_theme()
lctx.painter.force_repaint()
def open_coverage_overview(self, dctx=None):
"""
Open the dockable 'Coverage Overview' dialog.
"""
lctx = self.get_context(dctx)
# the coverage overview is already open & visible, nothing to do
if lctx.coverage_overview and lctx.coverage_overview.visible:
return
# show the coverage overview
disassembler.show_dockable("Coverage Overview")
# trigger an update check (this should only ever really 'check' once)
self.check_for_update()
def open_coverage_xref(self, address, dctx=None):
"""
Open the 'Coverage Xref' dialog for a given address.
"""
lctx = self.get_context(dctx)
# show the coverage xref dialog
dialog = CoverageXref(lctx.director, address)
if not dialog.exec_():
return
# activate the user selected xref (if one was double clicked)
if dialog.selected_coverage:
lctx.director.select_coverage(dialog.selected_coverage)
return
# load a coverage file from disk
disassembler.show_wait_box("Loading coverage from disk...")
created_coverage, errors = lctx.director.load_coverage_files(
[dialog.selected_filepath],
disassembler.replace_wait_box
)
if not created_coverage:
lmsg("No coverage files could be loaded...")
disassembler.hide_wait_box()
warn_errors(errors)
return
disassembler.replace_wait_box("Selecting coverage...")
lctx.director.select_coverage(created_coverage[0].name)
disassembler.hide_wait_box()
def interactive_load_batch(self, dctx=None):
"""
Perform the user-interactive loading of a coverage batch.
"""
lctx = self.get_context(dctx)
#
# kick off an asynchronous metadata refresh. this will run in the
# background while the user is selecting which coverage files to load
#
future = lctx.metadata.refresh_async(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
# can select the coverage files they would like to load from disk
#
filepaths = lctx.select_coverage_files()
if not filepaths:
lctx.director.metadata.abort_refresh()
return
# prompt the user to name the new coverage aggregate
default_name = "BATCH_%s" % lctx.director.peek_shorthand()
ok, batch_name = prompt_string(
"Batch Name:",
"Please enter a name for this coverage",
default_name
)
#
# if user didn't enter a name for the batch (or hit cancel) we should
# abort the loading process...
#
if not (ok and batch_name):
lmsg("User failed to enter a name for the batch coverage...")
lctx.director.metadata.abort_refresh()
return
#
# to begin mapping the loaded coverage data, we require that the
# asynchronous database metadata refresh has completed. if it is
# not done yet, we will block here until it completes.
#
# a progress dialog depicts the work remaining in the refresh
#
disassembler.show_wait_box("Building database metadata...")
lctx.metadata.go_synchronous()
await_future(future)
#
# now that the database metadata is available, we can use the director
# to normalize and condense (aggregate) all the coverage data
#
disassembler.replace_wait_box("Loading coverage from disk...")
batch_coverage, errors = lctx.director.load_coverage_batch(
filepaths,
batch_name,
disassembler.replace_wait_box
)
# if batch creation fails...
if not batch_coverage:
lmsg("Creation of batch '%s' failed..." % batch_name)
disassembler.hide_wait_box()
warn_errors(errors)
return
# select the newly created batch coverage
disassembler.replace_wait_box("Selecting coverage...")
lctx.director.select_coverage(batch_name)
# all done! pop the coverage overview to show the user their results
disassembler.hide_wait_box()
lmsg("Successfully loaded batch %s..." % batch_name)
self.open_coverage_overview(lctx.dctx)
# finally, emit any notable issues that occurred during load
warn_errors(errors, lctx.director.suppressed_errors)
def interactive_load_file(self, dctx=None):
"""
Perform the user-interactive loading of individual coverage files.
"""
lctx = self.get_context(dctx)
#
# kick off an asynchronous metadata refresh. this will run in the
# background while the user is selecting which coverage files to load
#
future = lctx.metadata.refresh_async(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
# can select the coverage files they would like to load from disk
#
filenames = lctx.select_coverage_files()
if not filenames:
lctx.metadata.abort_refresh()
return
#
# to begin mapping the loaded coverage data, we require that the
# asynchronous database metadata refresh has completed. if it is
# not done yet, we will block here until it completes.
#
# a progress dialog depicts the work remaining in the refresh
#
disassembler.show_wait_box("Building database metadata...")
lctx.metadata.go_synchronous()
await_future(future)
#
# now that the database metadata is available, we can use the director
# to load and normalize the selected coverage files
#
disassembler.replace_wait_box("Loading coverage from disk...")
created_coverage, errors = lctx.director.load_coverage_files(filenames, disassembler.replace_wait_box)
#
# if the director failed to map any coverage, the user probably
# provided bad files. emit any warnings and bail...
#
if not created_coverage:
lmsg("No coverage files could be loaded...")
disassembler.hide_wait_box()
warn_errors(errors)
return
#
# activate the first of the newly loaded coverage file(s). this is the
# one that will be visible in the coverage overview once opened
#
disassembler.replace_wait_box("Selecting coverage...")
lctx.director.select_coverage(created_coverage[0].name)
# all done! pop the coverage overview to show the user their results
disassembler.hide_wait_box()
lmsg("Successfully loaded %u coverage file(s)..." % len(created_coverage))
self.open_coverage_overview(lctx.dctx)
# finally, emit any notable issues that occurred during load
warn_errors(errors, lctx.director.suppressed_errors)
def check_for_update(self):
"""
Check if there is an update available for Lighthouse.
"""
if self._update_checked:
return
# wrap the callback (a popup) to ensure it gets called from the UI
callback = disassembler.execute_ui(disassembler.warning)
# kick off the async update check
check_for_update(self.PLUGIN_VERSION, callback)
self._update_checked = True
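Both `interactive_load_file()` and `interactive_load_batch()` above follow the same concurrency shape: start the metadata refresh in the background, let the user pick files in the foreground, then block on the refresh before mapping coverage. A rough stand-alone sketch of that shape, assuming hypothetical stand-ins for the plugin's `refresh_async()` / `await_future()`:

```python
import queue
import threading

# hypothetical stand-in for lctx.metadata.refresh_async(): run the
# expensive metadata build on a worker thread, posting the result to a
# queue that acts as our 'future'
def refresh_metadata_async():
    result = queue.Queue(maxsize=1)
    def worker():
        metadata = {"functions": 1234}  # pretend this is expensive
        result.put(metadata)
    threading.Thread(target=worker).start()
    return result

# hypothetical stand-in for await_future(): block until the worker finishes
def await_future(result):
    return result.get()

future = refresh_metadata_async()       # 1. kick off the refresh
filepaths = ["drcov.trace.log"]         # 2. user picks files meanwhile
metadata = await_future(future)         # 3. block until metadata is ready
assert metadata["functions"] == 1234    # 4. now safe to map coverage
```

The payoff is that the file-selection dialog hides most of the refresh latency, and the wait box only covers whatever work remains.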
@@ -0,0 +1,385 @@
import os
import logging
import idaapi
from lighthouse.context import LighthouseContext
from lighthouse.util.misc import plugin_resource
from lighthouse.integration.core import LighthouseCore
logger = logging.getLogger("Lighthouse.IDA.Integration")
#------------------------------------------------------------------------------
# Lighthouse IDA Integration
#------------------------------------------------------------------------------
class LighthouseIDA(LighthouseCore):
"""
Lighthouse UI Integration for IDA Pro.
"""
def __init__(self):
# menu entry icons
self._icon_id_xref = idaapi.BADADDR
self._icon_id_file = idaapi.BADADDR
self._icon_id_batch = idaapi.BADADDR
self._icon_id_overview = idaapi.BADADDR
# IDA ui hooks
self._ui_hooks = UIHooks(self)
# run initialization
super(LighthouseIDA, self).__init__()
def get_context(self, dctx=None, startup=True):
"""
Get the LighthouseContext object for a given database context.
NOTE: since IDA can only have one binary / IDB open at a time, the
dctx (database context) should always be 'None'.
"""
self.palette.warmup()
#
# there should only ever be 'one' disassembler / IDB context at any
# time for IDA. but if one does not exist yet, that means this is the
# first time the user has interacted with Lighthouse for this session
#
if dctx not in self.lighthouse_contexts:
# create a new 'context' representing this IDB
lctx = LighthouseContext(self, dctx)
if startup:
lctx.start()
# save the created ctx for future calls
self.lighthouse_contexts[dctx] = lctx
# return the lighthouse context object for this IDB
return self.lighthouse_contexts[dctx]
#--------------------------------------------------------------------------
# IDA Actions
#--------------------------------------------------------------------------
ACTION_LOAD_FILE = "lighthouse:load_file"
ACTION_LOAD_BATCH = "lighthouse:load_batch"
ACTION_COVERAGE_XREF = "lighthouse:coverage_xref"
ACTION_COVERAGE_OVERVIEW = "lighthouse:coverage_overview"
def _install_load_file(self):
"""
Install the 'File->Load->Code coverage file...' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "load.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_file = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_LOAD_FILE, # The action name
"~C~ode coverage file...", # The action text
IDACtxEntry(self.interactive_load_file), # The action handler
None, # Optional: action shortcut
"Load individual code coverage file(s)", # Optional: tooltip
self._icon_id_file # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load_file action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self.ACTION_LOAD_FILE, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load_file action to menu")
logger.info("Installed the 'Code coverage file' menu entry")
def _install_load_batch(self):
"""
Install the 'File->Load->Code coverage batch...' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "batch.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_batch = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_LOAD_BATCH, # The action name
"~C~ode coverage batch...", # The action text
IDACtxEntry(self.interactive_load_batch), # The action handler
None, # Optional: action shortcut
"Load and aggregate code coverage files", # Optional: tooltip
self._icon_id_batch # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load_batch action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self.ACTION_LOAD_BATCH, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load_batch action to menu")
logger.info("Installed the 'Code coverage batch' menu entry")
def _install_open_coverage_xref(self):
"""
Install the right click 'Coverage Xref' context menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "batch.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_xref = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_COVERAGE_XREF, # The action name
"Xrefs coverage sets...", # The action text
IDACtxEntry(self._pre_open_coverage_xref),# The action handler
None, # Optional: action shortcut
"List coverage sets containing this address", # Optional: tooltip
self._icon_id_xref # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register coverage_xref action with IDA")
self._ui_hooks.hook()
logger.info("Installed the 'Coverage Xref' menu entry")
def _install_open_coverage_overview(self):
"""
Install the 'View->Open subviews->Coverage Overview' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "overview.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_overview = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_COVERAGE_OVERVIEW, # The action name
"~C~overage Overview", # The action text
IDACtxEntry(self.open_coverage_overview), # The action handler
None, # Optional: action shortcut
"Open database code coverage overview", # Optional: tooltip
self._icon_id_overview # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register open coverage overview action with IDA")
# attach the action to the View-> dropdown menu
result = idaapi.attach_action_to_menu(
"View/Open subviews/Hex dump", # Relative path of where to add the action
self.ACTION_COVERAGE_OVERVIEW, # The action ID (see above)
idaapi.SETMENU_INS # We want to insert the action before ^
)
if not result:
raise RuntimeError("Failed to attach action to the 'View/Open subviews' dropdown")
logger.info("Installed the 'Coverage Overview' menu entry")
def _uninstall_load_file(self):
"""
Remove the 'File->Load file->Code coverage file...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/",
self.ACTION_LOAD_FILE
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_LOAD_FILE)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_file)
self._icon_id_file = idaapi.BADADDR
logger.info("Uninstalled the 'Code coverage file' menu entry")
def _uninstall_load_batch(self):
"""
Remove the 'File->Load file->Code coverage batch...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/",
self.ACTION_LOAD_BATCH
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_LOAD_BATCH)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_batch)
self._icon_id_batch = idaapi.BADADDR
logger.info("Uninstalled the 'Code coverage batch' menu entry")
def _uninstall_open_coverage_xref(self):
"""
Remove the right click 'Coverage Xref' context menu entry.
"""
self._ui_hooks.unhook()
# unregister the action
result = idaapi.unregister_action(self.ACTION_COVERAGE_XREF)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_xref)
self._icon_id_xref = idaapi.BADADDR
logger.info("Uninstalled the 'Coverage Xref' menu entry")
def _uninstall_open_coverage_overview(self):
"""
Remove the 'View->Open subviews->Coverage Overview' menu entry.
"""
# remove the entry from the View-> menu
result = idaapi.detach_action_from_menu(
"View/Open subviews/Hex dump",
self.ACTION_COVERAGE_OVERVIEW
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_COVERAGE_OVERVIEW)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_overview)
self._icon_id_overview = idaapi.BADADDR
logger.info("Uninstalled the 'Coverage Overview' menu entry")
#--------------------------------------------------------------------------
# Helpers
#--------------------------------------------------------------------------
def _inject_ctx_actions(self, view, popup, view_type):
"""
Inject context menu entries into IDA's right click menus.
NOTE: This is only being used for coverage xref at this time, but
may host additional actions in the future.
"""
if view_type == idaapi.BWN_DISASMS:
idaapi.attach_action_to_popup(
view,
popup,
self.ACTION_COVERAGE_XREF, # The action ID (see above)
"Xrefs graph from...", # Relative path of where to add the action
idaapi.SETMENU_APP # We want to append the action after ^
)
def _pre_open_coverage_xref(self):
"""
Grab a contextual address before opening the coverage xref dialog.
"""
self.open_coverage_xref(idaapi.get_screen_ea())
#------------------------------------------------------------------------------
# IDA UI Helpers
#------------------------------------------------------------------------------
class IDACtxEntry(idaapi.action_handler_t):
"""
A minimal context menu entry class to utilize IDA's action handlers.
"""
def __init__(self, action_function):
idaapi.action_handler_t.__init__(self)
self.action_function = action_function
def activate(self, ctx):
"""
Execute the embedded action_function when this context menu is invoked.
"""
self.action_function()
return 1
def update(self, ctx):
"""
Ensure the context menu is always available in IDA.
"""
return idaapi.AST_ENABLE_ALWAYS
class UIHooks(idaapi.UI_Hooks):
"""
Hooks for IDA's UI subsystem.
At the moment, we are only using these to inject into IDA's right click
context menus (eg, coverage xrefs)
"""
def __init__(self, integration):
self.integration = integration
super(UIHooks, self).__init__()
def finish_populating_widget_popup(self, widget, popup):
"""
A right click menu is about to be shown. (IDA 7.0+)
"""
#
# if lighthouse hasn't been used yet, there's nothing to do. we also
# don't want this event to trigger the creation of a lighthouse
# context! so we should bail early in this case...
#
if not self.integration.lighthouse_contexts:
return 0
# inject any of lighthouse's right click context menu's into IDA
lctx = self.integration.get_context(None)
if lctx.director.coverage_names:
self.integration._inject_ctx_actions(widget, popup, idaapi.get_widget_type(widget))
# must return 0 for ida...
return 0
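The `finish_populating_widget_popup()` hook above deliberately checks `self.integration.lighthouse_contexts` before calling `get_context()`, so that a passive UI callback can never be the thing that instantiates a context — the same rule the Binja `_is_xref_valid()` path follows. A compact sketch of that "peek, don't create" gating (`Plugin` here is a hypothetical stand-in, not the real integration class):

```python
class Plugin:
    def __init__(self):
        self.contexts = {}   # populated only by explicit user actions

    def get_context(self, key):
        # explicit user action: create the context on demand
        return self.contexts.setdefault(key, {"coverage_names": []})

    def on_popup(self, key):
        # passive UI callback: peek at existing state only, never create
        ctx = self.contexts.get(key)
        return bool(ctx and ctx["coverage_names"])

p = Plugin()
assert p.on_popup("idb") is False     # the query created nothing...
assert "idb" not in p.contexts
p.get_context("idb")["coverage_names"].append("trace.log")
assert p.on_popup("idb") is True      # ...but reflects real usage once it exists
```

Without this gating, every right click would spin up threads and Qt state for users who never touch the plugin.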
@@ -0,0 +1,94 @@
import time
import logging
import idaapi
from lighthouse.util.log import lmsg
from lighthouse.integration.ida_integration import LighthouseIDA
logger = logging.getLogger("Lighthouse.IDA.Loader")
#------------------------------------------------------------------------------
# Lighthouse IDA Loader
#------------------------------------------------------------------------------
#
# This file contains a stub 'plugin' class for Lighthouse as required by
# IDA Pro. Practically speaking, there should be little to *no* logic placed
# in this file because it is disassembler-specific.
#
# When IDA Pro is starting up, it will import all python files placed in its
# root plugin folder. It will then attempt to call PLUGIN_ENTRY() on each of
# the imported 'plugins'. We import PLUGIN_ENTRY into lighthouse_plugin.py
# so that IDA can see it.
#
# PLUGIN_ENTRY() is expected to return a plugin object (LighthouseIDAPlugin)
# derived from idaapi.plugin_t. IDA will register the plugin, and interface
# with the plugin object to load / unload the plugin at certain times, per
# its configuration (flags, hotkeys).
#
# There should be virtually no reason for you to modify this file.
#
def PLUGIN_ENTRY():
"""
Required plugin entry point for IDAPython Plugins.
"""
return LighthouseIDAPlugin()
class LighthouseIDAPlugin(idaapi.plugin_t):
"""
The IDA plugin stub for Lighthouse.
"""
#
# Plugin flags:
# - PLUGIN_MOD: Lighthouse is a plugin that may modify the database
# - PLUGIN_PROC: Load/unload Lighthouse when an IDB opens / closes
# - PLUGIN_HIDE: Hide Lighthouse from the IDA plugin menu
#
flags = idaapi.PLUGIN_PROC | idaapi.PLUGIN_MOD | idaapi.PLUGIN_HIDE
comment = "Code Coverage Explorer"
help = ""
wanted_name = "Lighthouse"
wanted_hotkey = ""
#--------------------------------------------------------------------------
# IDA Plugin Overloads
#--------------------------------------------------------------------------
def init(self):
"""
This is called by IDA when it is loading the plugin.
"""
try:
self._lighthouse = LighthouseIDA()
self._lighthouse.load()
except Exception as e:
lmsg("Failed to initialize Lighthouse")
logger.exception("Exception details:")
return idaapi.PLUGIN_KEEP
def run(self, arg):
"""
This is called by IDA when this file is loaded as a script.
"""
idaapi.warning("Lighthouse cannot be run as a script in IDA.")
def term(self):
"""
This is called by IDA when it is unloading the plugin.
"""
logger.debug("IDA term started...")
start = time.time()
logger.debug("-"*50)
try:
self._lighthouse.unload()
self._lighthouse = None
except Exception as e:
logger.exception("Failed to cleanly unload Lighthouse from IDA.")
end = time.time()
logger.debug("-"*50)
logger.debug("IDA term done... (%.3f seconds...)" % (end-start))
File diff suppressed because it is too large
@@ -0,0 +1,9 @@
from .painter import DatabasePainter
from lighthouse.util.disassembler import disassembler
if disassembler.NAME == "IDA":
from .ida_painter import IDAPainter as CoveragePainter
elif disassembler.NAME == "BINJA":
from .binja_painter import BinjaPainter as CoveragePainter
else:
raise NotImplementedError("DISASSEMBLER-SPECIFIC SHIM MISSING")
@@ -0,0 +1,131 @@
import logging
import binaryninja
from binaryninja import HighlightStandardColor
from binaryninja.highlight import HighlightColor
from lighthouse.painting import DatabasePainter
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.Painting.Binja")
#------------------------------------------------------------------------------
# Binary Ninja Painter
#------------------------------------------------------------------------------
class BinjaPainter(DatabasePainter):
"""
Asynchronous Binary Ninja database painter.
"""
def __init__(self, lctx, director, palette):
super(BinjaPainter, self).__init__(lctx, director, palette)
#--------------------------------------------------------------------------
# Paint Primitives
#--------------------------------------------------------------------------
#
# NOTE:
# due to the manner in which Binary Ninja implements basic block
# (node) highlighting, there is almost no need to paint individual
# instructions. for now, we will simply make the main instruction
# painting function a no-op
#
def _paint_instructions(self, instructions):
self._action_complete.set()
def _clear_instructions(self, instructions):
bv = disassembler[self.lctx].bv
state = bv.begin_undo_actions()
for address in instructions:
for func in bv.get_functions_containing(address):
func.set_auto_instr_highlight(address, HighlightStandardColor.NoHighlightColor)
self._painted_partial -= set(instructions)
self._painted_instructions -= set(instructions)
self._action_complete.set()
if hasattr(bv, "forget_undo_actions"):
bv.forget_undo_actions(state)
else:
bv.commit_undo_actions(state)
def _partial_paint(self, bv, instructions, color):
for address in instructions:
for func in bv.get_functions_containing(address):
func.set_auto_instr_highlight(address, color)
self._painted_partial |= set(instructions)
self._painted_instructions |= set(instructions)
def _paint_nodes(self, node_addresses):
bv = disassembler[self.lctx].bv
db_coverage = self.director.coverage
db_metadata = self.director.metadata
state = bv.begin_undo_actions()
r, g, b, _ = self.palette.coverage_paint.getRgb()
color = HighlightColor(red=r, green=g, blue=b)
partial_nodes = set()
for node_address in node_addresses:
node_metadata = db_metadata.nodes.get(node_address, None)
node_coverage = db_coverage.nodes.get(node_address, None)
# read comment in ida_painter.py (self._paint_nodes)
if not (node_coverage and node_metadata):
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
# special case for nodes that are only partially executed...
if node_coverage.instructions_executed != node_metadata.instruction_count:
partial_nodes.add(node_address)
self._partial_paint(bv, node_coverage.executed_instructions.keys(), color)
continue
for node in bv.get_basic_blocks_starting_at(node_address):
node.highlight = color
self._painted_nodes |= (set(node_addresses) - partial_nodes)
self._action_complete.set()
if hasattr(bv, "forget_undo_actions"):
bv.forget_undo_actions(state)
else:
bv.commit_undo_actions(state)
def _clear_nodes(self, node_addresses):
bv = disassembler[self.lctx].bv
db_metadata = self.director.metadata
state = bv.begin_undo_actions()
for node_address in node_addresses:
node_metadata = db_metadata.nodes.get(node_address, None)
# read comment in ida_painter.py (self._paint_nodes)
if not node_metadata:
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
for node in bv.get_basic_blocks_starting_at(node_address):
node.highlight = HighlightStandardColor.NoHighlightColor
self._painted_nodes -= set(node_addresses)
self._action_complete.set()
if hasattr(bv, "forget_undo_actions"):
bv.forget_undo_actions(state)
else:
bv.commit_undo_actions(state)
def _refresh_ui(self):
pass
def _cancel_action(self, job):
pass
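Several methods above prefer `forget_undo_actions()` when the Binary Ninja API exposes it and fall back to `commit_undo_actions()` otherwise. The feature-detection fallback, sketched with hypothetical stand-in classes (not the real `binaryninja.BinaryView` API):

```python
# Hypothetical BinaryView stand-ins; only the hasattr() fallback pattern
# mirrors the painter code above.
class OldBinaryView:
    def begin_undo_actions(self):
        return "undo-state"
    def commit_undo_actions(self, state):
        return "committed"

class NewBinaryView(OldBinaryView):
    def forget_undo_actions(self, state):
        return "forgotten"

def finish_paint(bv, state):
    """Prefer the newer API when present, else commit the undo actions."""
    if hasattr(bv, "forget_undo_actions"):
        return bv.forget_undo_actions(state)
    return bv.commit_undo_actions(state)

print(finish_paint(NewBinaryView(), "s"))  # -> forgotten
print(finish_paint(OldBinaryView(), "s"))  # -> committed
```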
@@ -0,0 +1,474 @@
import struct
import ctypes
import logging
import functools
import idc
import idaapi
from idaapi import clr_abits, set_abits, netnode, set_node_info
from lighthouse.util import *
from lighthouse.util.disassembler import disassembler
from lighthouse.util.disassembler.ida_api import map_line2citem, map_line2node, lex_citem_indexes, hexrays_available
from lighthouse.painting import DatabasePainter
logger = logging.getLogger("Lighthouse.Painting.IDA")
#------------------------------------------------------------------------------
# MFF_NOWAIT Workaround
#------------------------------------------------------------------------------
#
# due to the asynchronous nature of the database painter core, we need
# to use IDA's execute_sync() with idaapi.MFF_WRITE to perform 'paint'
# actions (which modify the database).
#
# 1. the first issue is that a deadlock can occur when attempting to use
# execute_sync() with MFF_WRITE from a thread when IDA is in the process
# of closing. This would occur when a paint was in progress, and the user
# attempts to abruptly close the database.
#
# the solution to this is to use the MFF_NOWAIT flag with MFF_WRITE, which
# means execute_sync() is non-blocking. this avoids the deadlock that could
# occur between the main thread and the async (painting) thread on close.
#
# 2. but prior to IDA 7.0 SP1, there was a bug with execute_sync() that
# could cause IDA to abort() non-deterministically when using the
# MFF_NOWAIT flag. The abort manifests as a hard-crash of IDA.
#
# this is an issue that has haunted lighthouse since almost the beginning,
# causing a number of compatibility/stability issues. the bug was fixed in
# in IDA 7.0 SP1, but older versions of IDA are afflicted.
#
# this section of code constitutes some of the most fragile, convoluted,
# and regression prone code in lighthouse. through some miraculous feats
# of engineering, the solution below appears to safely resolve both of
# these problems for downlevel versions (IDA 6.8 --> 7.x)
#
from lighthouse.util.qt import QtCore
class ToMainthread(QtCore.QObject):
"""
A Qt object whose sole purpose is to execute code on the mainthread.
Below, we define a Qt signal called 'mainthread'. Any thread can emit() this
signal, where it will be handled in the main application thread.
"""
mainthread = QtCore.pyqtSignal(object)
def __init__(self):
super(ToMainthread, self).__init__()
#
# from any thread, one can call 'mainthread.emit(a_function)', passing
# in a callable object (a_function) which will be executed (through the
# lambda) on the main application thread.
#
self.mainthread.connect(lambda x: x())
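The `ToMainthread` trick relies on Qt delivering a cross-thread `emit()` on the receiving object's thread. The same marshalling idea, sketched without Qt using a plain queue that stands in for the main-thread event loop (names here are illustrative, not part of Lighthouse):

```python
import queue
import threading

# Queue standing in for the Qt event loop: any thread enqueues a callable,
# the "main thread" drains and executes them.
_main_queue = queue.Queue()

def emit(func):
    """Called from any thread: schedule func for the main thread."""
    _main_queue.put(func)

def drain_mainthread():
    """Run on the main thread: execute every pending callable."""
    results = []
    while not _main_queue.empty():
        results.append(_main_queue.get()())
    return results

worker = threading.Thread(target=lambda: emit(lambda: "painted"))
worker.start()
worker.join()
print(drain_mainthread())  # -> ['painted']
```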
def execute_paint(function):
"""
A function decorator to safely paint the IDA database from any thread.
"""
@functools.wraps(function)
def wrapper(*args, **kwargs):
#
# the first argument passed to this decorator will be the
# IDAPainter class instance
#
ida_painter = args[0]
#
# we wrap up the remaining args (and paint function) into a single
# packaged up callable object (a functools.partial)
#
ff = functools.partial(function, *args, **kwargs)
#
# if we are using a 'bugged' downlevel version of IDA, package another
# callable to 'synchronize' a database write. This callable will get
# passed to the main thread and executed through the Qt event loop.
#
# the execute_sync should technically happen in-line, avoiding the
# possibility of deadlocks or aborts as described above.
#
if idaapi.IDA_SDK_VERSION < 710:
fff = functools.partial(idaapi.execute_sync, ff, idaapi.MFF_WRITE)
ida_painter._signal.mainthread.emit(fff)
return idaapi.BADADDR
#
# in IDA 7.1, the MFF_NOWAIT bug is definitely fixed, so we can just
# use it to schedule our paint action ... as designed.
#
return idaapi.execute_sync(ff, idaapi.MFF_NOWAIT | idaapi.MFF_WRITE)
return wrapper
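The decorator above packages the bound method call into a `functools.partial` before handing it to `execute_sync`. The wrapping mechanics, sketched without IDA (the `fake_execute_sync` stand-in simply runs the callable immediately):

```python
import functools

dispatched = []

def fake_execute_sync(func, flags):
    """Stand-in for idaapi.execute_sync: record the flags, run the callable."""
    dispatched.append(flags)
    return func()

def execute_paint(function):
    """Wrap a method so its call is marshalled through execute_sync."""
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        # package the method + args into a single callable object
        ff = functools.partial(function, *args, **kwargs)
        return fake_execute_sync(ff, "MFF_NOWAIT|MFF_WRITE")
    return wrapper

class Painter:
    @execute_paint
    def paint(self, n):
        return n * 2

print(Painter().paint(21))  # -> 42
```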
#------------------------------------------------------------------------------
# IDA Painter
#------------------------------------------------------------------------------
class IDAPainter(DatabasePainter):
"""
Asynchronous IDA database painter.
"""
def __init__(self, lctx, director, palette):
super(IDAPainter, self).__init__(lctx, director, palette)
self._streaming_instructions = True
self._idp_hooks = InstructionPaintHooks(director, palette)
self._vduis = {}
# see the MFF_NOWAIT workaround details above
self._signal = ToMainthread()
def terminate(self):
#
# IDA is either closing or simply switching databases... we should try
# to unhook our processor hooks so that artifacts of this painter do
# not carry over to the next IDB / session.
#
# if we don't do this, our current 'IDP' hooks will continue to fire
# once the next IDB is open. we don't want this, because a new painter
# will be spun up and it will install its own instance of hooks...
#
if self._idp_hooks:
self._idp_hooks.unhook()
self._idp_hooks = None
# spin down the painter as usual
super(IDAPainter, self).terminate()
def _notify_status_changed(self, status):
# enable / disable hook based on the painter being enabled or disabled
if status:
self._idp_hooks.hook()
if hexrays_available():
idaapi.install_hexrays_callback(self._hxe_callback)
else:
self._idp_hooks.unhook()
if hexrays_available():
idaapi.remove_hexrays_callback(self._hxe_callback)
# send the status changed signal...
super(IDAPainter, self)._notify_status_changed(status)
#------------------------------------------------------------------------------
# Paint Actions
#------------------------------------------------------------------------------
@execute_paint
def _paint_instructions(self, instructions):
"""
Paint instruction level coverage defined by the current database mapping.
NOTE: we now use 'streaming' mode for instructions rather than this.
"""
color = struct.pack("I", self.palette.coverage_paint+1)
for address in instructions:
set_abits(address, 0x40000)
nn = netnode(address)
nn.supset(20, color, 'A')
self._painted_instructions |= set(instructions)
self._action_complete.set()
@execute_paint
def _clear_instructions(self, instructions):
"""
Clear paint from the given instructions.
NOTE: we now use 'streaming' mode for instructions rather than this.
"""
for address in instructions:
clr_abits(address, 0x40000)
self._painted_instructions -= set(instructions)
self._action_complete.set()
@execute_paint
def _paint_nodes(self, node_addresses):
"""
Paint node level coverage defined by the current database mappings.
"""
db_coverage = self.director.coverage
db_metadata = self.director.metadata
# create a node info object as our vehicle for setting the node color
node_info = idaapi.node_info_t()
node_info.bg_color = self.palette.coverage_paint
node_flags = idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
#
# loop through every node that we have coverage data for, painting them
# in the IDA graph view as applicable.
#
for node_address in node_addresses:
# retrieve all the necessary structures to paint this node
node_coverage = db_coverage.nodes.get(node_address, None)
functions = db_metadata.get_functions_by_node(node_address)
#
# due to the fact that multiple functions may 'share' a node,
# we need to go through and explicitly fetch the node metadata
# from each function when performing a paint.
#
# this is because each function will have a unique node_id in
# the target node_metadata(s)
#
node_metadatas = {}
for function in functions:
# attempt to safely fetch the node metadata from a function
node_metadata = function.nodes.get(node_address, None)
#
# this is possible if the function is getting torn down. this is because
# we don't use locks. this just means it is time for us to bail as
# the metadata state is changing and the paint should be canceled
#
if not node_metadata:
node_metadatas = {}
break
node_metadatas[function.address] = node_metadata
#
# if we did not get *everything* that we needed, then it is
# possible the database changed, or the coverage set changed...
#
# this is kind of what we get for not using locks :D but that's
# okay, just stop painting here and let the painter sort it out
#
if not (node_coverage and node_metadatas):
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
# ignore nodes that are only partially executed
if node_coverage.instructions_executed != node_metadata.instruction_count:
continue
# do the *actual* painting of a single node instance
for function_address, node_metadata in iteritems(node_metadatas):
set_node_info(
function_address,
node_metadata.id,
node_info,
node_flags
)
self._painted_nodes |= set(node_addresses)
self._action_complete.set()
@execute_paint
def _clear_nodes(self, node_addresses):
"""
Clear paint from the given graph nodes.
"""
db_metadata = self.director.metadata
# create a node info object as our vehicle for resetting the node color
node_info = idaapi.node_info_t()
node_info.bg_color = idc.DEFCOLOR
node_flags = idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
#
# loop through every node that we have metadata for, clearing
# their paint (color) in the IDA graph view as applicable.
#
# read self._paint_nodes() comments for more info, the code below
# is very similar, sans the repetitive comments
#
for node_address in node_addresses:
functions = db_metadata.get_functions_by_node(node_address)
node_metadatas = {}
for function in functions:
node_metadata = function.nodes.get(node_address, None)
if not node_metadata:
node_metadatas = {}
break
node_metadatas[function.address] = node_metadata
# abort if something looks like it changed...
if not node_metadatas:
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
# do the *actual* painting of a single node instance
for function_address, node_metadata in iteritems(node_metadatas):
set_node_info(
function_address,
node_metadata.id,
node_info,
node_flags
)
self._painted_nodes -= set(node_addresses)
self._action_complete.set()
@execute_paint
def _refresh_ui(self):
"""
Note that this has been decorated with @execute_paint (vs @execute_ui)
to help avoid deadlocking on exit.
"""
for vdui in self._vduis.values():
if vdui.valid():
vdui.refresh_ctext(False)
idaapi.refresh_idaview_anyway()
def _cancel_action(self, job_id):
if idaapi.IDA_SDK_VERSION < 710:
return
idaapi.cancel_exec_request(job_id)
#------------------------------------------------------------------------------
# Painting - HexRays (Decompilation / Source)
#------------------------------------------------------------------------------
def paint_hexrays(self, cfunc, db_coverage):
"""
Paint decompilation text for the given HexRays Window.
"""
logger.debug("Painting HexRays for 0x%X" % cfunc.entry_ea)
# more code-friendly, readable aliases
db_metadata = db_coverage._metadata
decompilation_text = cfunc.get_pseudocode()
#
# the objective here is to paint hexrays lines that are associated with
# our runtime data. unfortunately, there are very few API resources that
# link decompilation line numbers to anything (eg, citems, nodes, ea, etc)
#
# this means that we must build our own data relationships to draw from
#
#
# first, let's build a relationship between a given line of text, and the
# citems that contribute to it. the only way to do that (as I see it) is
# to lex citem ID's out of the decompiled output string
#
line2citem = map_line2citem(decompilation_text)
#
# now that we have some understanding of how citems contribute to each
# line of decompiled text, we can use this information to build a
# relationship that ties graph nodes (basic blocks) to individual lines.
#
line2node = map_line2node(cfunc, db_metadata, line2citem)
# great, now we have all the information we need to paint
#
# paint hexrays output
#
lines_painted = 0
# extract the node addresses that have been hit by our function's mapping data
executed_nodes = set(viewkeys(db_coverage.functions[cfunc.entry_ea].nodes))
#
# now we loop through every line_number of the decompiled text that claims
# to have a relationship with a graph node (basic block) and check to see
# if it contains a node our coverage has marked as executed
#
for line_number, line_nodes in iteritems(line2node):
#
# if there is any intersection of nodes on this line and the coverage
# data's set of executed nodes, we are inclined to color it
#
if line_nodes & executed_nodes:
decompilation_text[line_number].bgcolor = self.palette.coverage_paint
lines_painted += 1
#
# done painting from our mapping data
#
# if there was nothing painted yet, there's no point in continuing...
if not lines_painted:
return
#
# if we made it this far, we must have painted *some* lines inside the
# function. that means we should paint the function declaration, and
# header (variable declaration) lines as their execution will be implied
#
for line_number in xrange(0, cfunc.hdrlines):
decompilation_text[line_number].bgcolor = self.palette.coverage_paint
lines_painted += 1
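The decompiler paint above reduces to a per-line set intersection: a line is painted when any of the graph nodes contributing to it appears in the executed set. A tiny sketch with made-up line-to-node mappings:

```python
# Hypothetical line_number -> node-address mapping and executed node set,
# mirroring the line2node / executed_nodes intersection above.
line2node = {3: {0x1000}, 5: {0x1010, 0x1020}, 8: {0x1030}}
executed_nodes = {0x1010, 0x1030}

painted_lines = [line for line, nodes in sorted(line2node.items())
                 if nodes & executed_nodes]
print(painted_lines)  # -> [5, 8]
```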
def _hxe_callback(self, event, *args):
"""
HexRays event handler.
"""
# decompilation text generation is complete and it is about to be shown
if event == idaapi.hxe_text_ready:
# more code-friendly, readable aliases
vdui = args[0]
cfunc = vdui.cfunc
self._vduis[vdui.view_idx] = vdui
# if there's no coverage data for this function, there's nothing to do
if cfunc.entry_ea not in self.director.coverage.functions:
return 0
# paint the decompilation text for this function
self.paint_hexrays(cfunc, self.director.coverage)
# stop tracking vdui's if they close...
elif event == idaapi.hxe_close_pseudocode:
vdui = args[0]
self._vduis.pop(vdui.view_idx, None)
return 0
#------------------------------------------------------------------------------
# Instruction Paint Streaming (Processor Hooks)
#------------------------------------------------------------------------------
class InstructionPaintHooks(idaapi.IDP_Hooks):
"""
Hook IDA's processor callbacks to paint instructions on the fly.
"""
def __init__(self, director, palette):
super(InstructionPaintHooks, self).__init__()
self.director = director
self.palette = palette
def ev_get_bg_color(self, pcolor, ea):
if ea not in self.director.coverage.coverage:
return 0
bgcolor = ctypes.cast(int(pcolor), ctypes.POINTER(ctypes.c_int))
bgcolor[0] = self.palette.coverage_paint
return 1
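`ev_get_bg_color` receives a raw pointer to the color slot and writes through it with `ctypes`. A self-contained sketch of the same pointer write, using a locally allocated C int in place of the slot IDA passes in:

```python
import ctypes

# Allocate a C int to stand in for the bgcolor slot IDA passes by pointer.
slot = ctypes.c_int(0)
pcolor = ctypes.addressof(slot)

def write_color(pcolor, color):
    """Write 'color' through the raw pointer, as ev_get_bg_color does."""
    bgcolor = ctypes.cast(pcolor, ctypes.POINTER(ctypes.c_int))
    bgcolor[0] = color
    return 1  # non-zero tells IDA the color was set

write_color(pcolor, 0x00FF00)
print(hex(slot.value))  # -> 0xff00
```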
@@ -0,0 +1,671 @@
import abc
import time
import logging
import threading
from lighthouse.util import *
from lighthouse.util.debug import catch_errors
from lighthouse.coverage import FunctionCoverage
logger = logging.getLogger("Lighthouse.Painting")
class DatabasePainter(object):
"""
An asynchronous disassembler database painting engine.
"""
__metaclass__ = abc.ABCMeta
MSG_ABORT = -1
MSG_TERMINATE = 0
MSG_REPAINT = 1
MSG_FORCE_REPAINT = 2
MSG_CLEAR = 3
MSG_FORCE_CLEAR = 4
MSG_REBASE = 5
def __init__(self, lctx, director, palette):
#----------------------------------------------------------------------
# Misc
#----------------------------------------------------------------------
self.lctx = lctx
self.palette = palette
self.director = director
self._enabled = False
self._started = False
#----------------------------------------------------------------------
# Painted State
#----------------------------------------------------------------------
#
# the coverage painter maintains its own internal record of what
# instruction addresses and graph nodes it has painted.
#
self._imagebase = BADADDR
self._painted_nodes = set()
self._painted_partial = set()
self._painted_instructions = set()
#
# these toggles will let the core painter (this class) know that it
# does not have to order explicit paints of instructions or nodes.
#
# this is because a disassembler-specific painter may be able to hook
# unique callbacks for painting graphs nodes or instructions
# 'on-the-fly' as they are rendered.
#
# these types of paints are ephemeral and the most performant, they
# also will not need to be tracked by the painter.
#
self._streaming_nodes = False
self._streaming_instructions = False
#----------------------------------------------------------------------
# Async
#----------------------------------------------------------------------
#
# to communicate with the asynchronous painting thread, we send
# a message via the thread event to signal a new paint request, and
# use the repaint_requested bool to interrupt a running paint request.
#
self._action_complete = threading.Event()
self._msg_queue = queue.Queue()
self._end_threads = False
#
# asynchronous database painting thread
#
self._painting_worker = threading.Thread(
target=self._async_database_painter,
name="DatabasePainter"
)
#----------------------------------------------------------------------
# Callbacks
#----------------------------------------------------------------------
# painter callbacks
self._status_changed_callbacks = []
# register for cues from the director
self.director.coverage_switched(self.repaint)
self.director.coverage_modified(self.repaint)
self.director.refreshed(self.check_rebase)
def start(self):
"""
Start the painter.
"""
if self._started:
return
# start the painter thread
self._painting_worker.start()
# all done
self._started = True
self.set_enabled(True)
#--------------------------------------------------------------------------
# Status
#--------------------------------------------------------------------------
@property
def enabled(self):
"""
Return the active painting status of the painter.
"""
return self._enabled
def set_enabled(self, enabled):
"""
Enable or disable the painter.
"""
# enabled/disabled status is not changing, ignore...
if enabled == self._enabled:
return
lmsg("%s painting..." % ("Enabling" if enabled else "Disabling"))
self._enabled = enabled
# notify listeners that the painter has been enabled/disabled
self._notify_status_changed(enabled)
# paint or clear the database based on the change of status...
if enabled:
self._send_message(self.MSG_REPAINT)
else:
self._send_message(self.MSG_CLEAR)
#--------------------------------------------------------------------------
# Commands
#--------------------------------------------------------------------------
def terminate(self):
"""
Cleanup & terminate the painter.
"""
self._end_threads = True
self._msg_queue.put(self.MSG_TERMINATE)
try:
self._painting_worker.join()
except RuntimeError: # thread was never started...
pass
# best effort to free up resources & improve interpreter spindown
del self._painted_nodes
del self._painted_instructions
del self._status_changed_callbacks
def repaint(self):
"""
Paint coverage defined by the current database mappings.
"""
self._send_message(self.MSG_REPAINT)
def force_repaint(self):
"""
Force a coverage repaint of the current database mappings.
"""
self._send_message(self.MSG_FORCE_REPAINT)
def force_clear(self):
"""
Clear all paint from the current database (based on metadata)
"""
self._send_message(self.MSG_FORCE_CLEAR)
self.set_enabled(False)
def check_rebase(self):
"""
Perform a rebase on the painted data cache (if necessary).
"""
self._send_message(self.MSG_REBASE)
self._send_message(self.MSG_REPAINT)
def _send_message(self, message):
"""
Queue a painter command for execution.
"""
if not self._started:
return
self._msg_queue.put(message)
#--------------------------------------------------------------------------
# Callbacks
#--------------------------------------------------------------------------
def status_changed(self, callback):
"""
Subscribe a callback for painter status change events.
"""
register_callback(self._status_changed_callbacks, callback)
def _notify_status_changed(self, status):
"""
Notify listeners of a painter status change event.
"""
notify_callback(self._status_changed_callbacks, status)
#--------------------------------------------------------------------------
# Paint Primitives
#--------------------------------------------------------------------------
@abc.abstractmethod
def _paint_instructions(self, instructions):
"""
Paint instruction coverage defined by the current database mapping.
"""
pass
@abc.abstractmethod
def _clear_instructions(self, instructions):
"""
Clear paint from the given instructions.
"""
pass
@abc.abstractmethod
def _paint_nodes(self, nodes_coverage):
"""
Paint node coverage defined by the current database mappings.
"""
pass
@abc.abstractmethod
def _clear_nodes(self, nodes_metadata):
"""
Clear paint from the given graph nodes.
"""
pass
@abc.abstractmethod
def _refresh_ui(self):
"""
Refresh the disassembler UI to ensure paint is rendered.
"""
pass
@abc.abstractmethod
def _cancel_action(self, job):
"""
Cancel a paint action using something representing its job.
"""
pass
#------------------------------------------------------------------------------
# Painting - High Level
#------------------------------------------------------------------------------
def _priority_paint(self):
"""
Immediately repaint regions of the database visible to the user.
Return True upon completion, or False if interrupted.
"""
if self._streaming_instructions and self._streaming_nodes:
return True
# get current function / user location in the database
cursor_address = disassembler[self.lctx].get_current_address()
# attempt to paint the functions in the immediate cursor vicinity
result = self._priority_paint_functions(cursor_address)
# force a refresh *now* as this is a priority painting
self._refresh_ui()
# all done
return result
def _priority_paint_functions(self, target_address, neighbors=1):
"""
Paint functions in the immediate vicinity of the given address.
This will paint both the instructions & graph nodes of defined functions.
"""
db_metadata = self.director.metadata
db_coverage = self.director.coverage
blank_coverage = FunctionCoverage(BADADDR)
# get the function metadata for the function closest to our cursor
function_metadata = db_metadata.get_closest_function(target_address)
if not function_metadata:
return False
# select the range of functions around us that we would like to paint
func_num = db_metadata.get_function_index(function_metadata.address)
func_num_start = max(func_num - neighbors, 0)
func_num_end = min(func_num + neighbors + 1, len(db_metadata.functions) - 1)
# repaint the specified range of functions
for current_num in xrange(func_num_start, func_num_end):
# get the next function to paint
function_metadata = db_metadata.get_function_by_index(current_num)
if not function_metadata:
continue
# get the function coverage data for the target address
function_address = function_metadata.address
function_coverage = db_coverage.functions.get(function_address, blank_coverage)
if not self._streaming_nodes:
# clear nodes
must_clear = sorted(set(function_metadata.nodes) - set(function_coverage.nodes))
self._action_complete.clear()
self._clear_nodes(must_clear)
self._action_complete.wait()
# paint nodes
must_paint = sorted(function_coverage.nodes)
self._action_complete.clear()
self._paint_nodes(must_paint)
self._action_complete.wait()
if not self._streaming_instructions:
# clear instructions
must_clear = sorted(function_metadata.instructions - function_coverage.instructions)
self._action_complete.clear()
self._clear_instructions(must_clear)
self._action_complete.wait()
# paint instructions
must_paint = sorted(function_coverage.instructions)
self._action_complete.clear()
self._paint_instructions(must_paint)
self._action_complete.wait()
# paint finished successfully
return True
def _paint_database(self):
"""
Repaint the current database based on the current state.
"""
logger.debug("Painting database...")
# more code-friendly, readable aliases (db_XX == database_XX)
db_coverage = self.director.coverage
db_metadata = self.director.metadata
start = time.time()
#------------------------------------------------------------------
# initialize imagebase if it hasn't been already...
if self._imagebase == BADADDR:
self._imagebase = db_metadata.imagebase
# immediately paint user-visible regions of the database
if not self._priority_paint():
return False # a repaint was requested
#
# if the painter is not capable of 'streaming' the coverage paint,
# then we must explicitly paint the instructions & nodes here
#
if not self._streaming_instructions:
#
# TODO: 'partially painted nodes' might be a little funny / not
# working correctly in IDA if we ever disable instruction streaming...
#
# compute the painted instructions that will not get painted over
stale_instr = self._painted_instructions - db_coverage.coverage
stale_instr |= (self._painted_partial - db_coverage.partial_instructions)
# clear old instruction paint
if not self._async_action(self._clear_instructions, stale_instr):
return False # a repaint was requested
# paint new instructions
new_instr = sorted(db_coverage.coverage - self._painted_instructions)
if not self._async_action(self._paint_instructions, new_instr):
return False # a repaint was requested
if not self._streaming_nodes:
# compute the painted nodes that will not get painted over
stale_nodes = self._painted_nodes - viewkeys(db_coverage.nodes)
stale_nodes |= db_coverage.partial_nodes
# clear old node paint
if not self._async_action(self._clear_nodes, stale_nodes):
return False # a repaint was requested
# paint new nodes
new_nodes = sorted(viewkeys(db_coverage.nodes) - self._painted_nodes)
if not self._async_action(self._paint_nodes, new_nodes):
return False # a repaint was requested
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Painting took %.2f seconds" % (end - start))
# paint finished successfully
return True
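The repaint above is driven by simple set arithmetic: stale paint is whatever is currently painted but no longer covered, and new paint is whatever is covered but not yet painted. A tiny sketch with hypothetical address sets:

```python
# Hypothetical address sets illustrating the stale/new computation above.
painted = {0x1000, 0x1004, 0x1008}          # addresses currently painted
covered = {0x1004, 0x1008, 0x100C, 0x1010}  # addresses in the new coverage

stale = painted - covered   # must be cleared: no longer covered
new = covered - painted     # must be painted: not yet on screen

print(sorted(hex(a) for a in stale))  # -> ['0x1000']
print(sorted(hex(a) for a in new))    # -> ['0x100c', '0x1010']
```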
def _clear_database(self):
"""
Clear all paint from the current database using the known paint state.
"""
logger.debug("Clearing database paint...")
start = time.time()
#------------------------------------------------------------------
db_metadata = self.director.metadata
# clear all instructions
if not self._streaming_instructions:
if not self._async_action(self._clear_instructions, self._painted_instructions):
return False # a repaint was requested
# clear all nodes
if not self._streaming_nodes:
if not self._async_action(self._clear_nodes, self._painted_nodes):
return False # a repaint was requested
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Database paint cleared in %.2f seconds..." % (end-start))
# sanity checks...
assert self._painted_nodes == set()
assert self._painted_instructions == set()
# paint finished successfully
return True
def _force_paint_database(self):
"""
Forcibly repaint the database.
"""
db_metadata = self.director.metadata
text = "Repainting the database..."
logger.debug(text)
is_modal = bool(disassembler.NAME != "IDA")
disassembler.execute_ui(disassembler.show_wait_box)(text, False)
start = time.time()
#------------------------------------------------------------------
# discard current / known paint state
self._painted_nodes = set()
self._painted_partial = set()
self._painted_instructions = set()
# paint the database...
self._paint_database()
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Database repainted in %.2f seconds..." % (end-start))
time.sleep(.2) # XXX: this seems to fix a bug where the waitbox doesn't close if the paint is too fast??
disassembler.execute_ui(disassembler.hide_wait_box)()
# paint finished successfully
return True
def _force_clear_database(self):
"""
Forcibly clear the paint from all known database addresses.
"""
db_metadata = self.director.metadata
text = "Forcibly clearing all paint from database..."
logger.debug(text)
#
# NOTE: forcefully clearing the database of paint can take a long time
# in certain cases, so we want to block the user from doing anything
# to the database while we're working.
#
# we will pop up a waitbox to block them, but we have to be careful as
# a *modal* waitbox will conflict with IDA's processing of MFF_WRITE
# requests making it wait for the waitbox to close before processing
#
# therefore, we put in a little bodge wire here to make sure the
# waitbox is *not* modal for IDA... but will be in the normal case.
# it also helps that IDA will be busy processing our 'write' requests,
# so the UI will be mostly frozen to the user anyway!
#
is_modal = bool(disassembler.NAME != "IDA")
disassembler.execute_ui(disassembler.show_wait_box)(text, is_modal)
start = time.time()
#------------------------------------------------------------------
self._action_complete.clear()
self._clear_instructions(sorted(db_metadata.instructions))
self._action_complete.wait()
self._action_complete.clear()
self._clear_nodes(sorted(db_metadata.nodes))
self._action_complete.wait()
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Database paint cleared in %.2f seconds..." % (end-start))
time.sleep(.2) # XXX: this seems to fix a bug where the waitbox doesn't close if the clear is too fast??
disassembler.execute_ui(disassembler.hide_wait_box)()
# paint finished successfully
return True
def _rebase_database(self):
"""
Rebase the active database paint.
TODO/XXX: there may be some edge cases where painting can be wrong if
a rebase occurs while the painter is running.
"""
db_metadata = self.director.metadata
# a rebase has not occurred
if not db_metadata.cached or (db_metadata.imagebase == self._imagebase):
return False
# compute the offset of the rebase
rebase_offset = db_metadata.imagebase - self._imagebase
# rebase the cached addresses of what we have painted
self._painted_nodes = set([address+rebase_offset for address in self._painted_nodes])
self._painted_instructions = set([address+rebase_offset for address in self._painted_instructions])
self._imagebase = db_metadata.imagebase
# a rebase has been observed
return True
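The rebase bookkeeping above boils down to shifting every cached address by the imagebase delta. A minimal standalone sketch, using hypothetical addresses (not taken from any real database):

```python
# hypothetical cached paint state, relative to an old imagebase of 0x400000
painted_nodes = {0x401000, 0x402000}
old_imagebase = 0x400000
new_imagebase = 0x500000

# compute the rebase delta and shift every cached painted address by it
rebase_offset = new_imagebase - old_imagebase
painted_nodes = {address + rebase_offset for address in painted_nodes}
```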
#--------------------------------------------------------------------------
# Asynchronous Painting
#--------------------------------------------------------------------------
@catch_errors
def _async_database_painter(self):
"""
Asynchronous database painting worker loop.
"""
logger.debug("Starting DatabasePainter thread...")
#
# Asynchronous Database Painting Loop
#
while not self._end_threads:
# wait for the next command to come through
action = self._msg_queue.get()
# repaint the database based on the current state
if action == self.MSG_REPAINT:
result = self._paint_database()
# forcibly repaint the database based on the current state
elif action == self.MSG_FORCE_REPAINT:
result = self._force_paint_database()
# clear the database based on the current state
elif action == self.MSG_CLEAR:
result = self._clear_database()
# clear all possible database paint
elif action == self.MSG_FORCE_CLEAR:
result = self._force_clear_database()
# check for a rebase of the painted data
elif action == self.MSG_REBASE:
result = self._rebase_database()
# thrown internally to escape a stale paint, just ignore
elif action == self.MSG_ABORT:
continue
# spin down the painting thread (this thread)
elif action == self.MSG_TERMINATE:
break
# unknown command
else:
logger.error("UNKNOWN COMMAND! %s" % str(action))
break
# refresh the UI to ensure paint changes are rendered
self._refresh_ui()
# thread exit
logger.debug("Exiting DatabasePainter thread...")
def _async_action(self, paint_action, work_iterable):
"""
Split a normal paint routine into interruptible chunks.
Internal routine for asynchronous painting.
"""
CHUNK_SIZE = 1500 # somewhat arbitrary
# split the given nodes into multiple paints
for work_chunk in chunks(list(work_iterable), CHUNK_SIZE):
#
# reset the paint event signal so that it is ready for the next
paint request. it will let us know when the asynchronous paint
# action has completed in the IDA main thread
#
self._action_complete.clear()
#
# paint or unpaint a chunk of 'work' (nodes, or instructions) with
# the given paint function (eg, paint_nodes, clear_instructions)
#
paint_job = paint_action(work_chunk)
#
# wait for the asynchronous paint event to complete or a signal that
# we should end this thread (via end_threads)
#
while not (self._action_complete.wait(timeout=0.2) or self._end_threads):
continue
#
# our end_threads signal/bool can only originate from the main IDA
# thread (plugin termination). we make the assumption that no more
# MFF_WRITE requests (eg, 'paint_action') will get processed.
#
# we do a best effort to cancel the in-flight job (just in case)
# and return so we can exit the thread.
#
if self._end_threads:
self._cancel_action(paint_job)
return False
#
# the operation has been interrupted by a repaint request, bail
# immediately so that we can process the next repaint
#
if not self._msg_queue.empty():
return False
# operation completed successfully
return True
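`_async_action()` leans on a `chunks()` helper from `lighthouse.util`. A minimal sketch of what such a helper might look like (an assumption for illustration, not the actual Lighthouse implementation):

```python
def chunks(sequence, chunk_size):
    """Yield successive chunk_size-sized slices from a sequence."""
    for i in range(0, len(sequence), chunk_size):
        yield sequence[i:i + chunk_size]
```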
@@ -0,0 +1 @@
from .coverage_reader import CoverageReader
@@ -0,0 +1,114 @@
import abc
class CoverageFile(object):
"""
Templated class for Lighthouse-compatible code coverage file reader.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def __init__(self, filepath=None):
self.filepath = filepath
self.modules = {}
self._parse()
#--------------------------------------------------------------------------
# Parsing Routines
#--------------------------------------------------------------------------
@abc.abstractmethod
def _parse(self):
"""
Load and parse coverage data from the file defined by self.filepath
Within this function, a custom CoverageFile is expected to attempt to
parse the coverage file from disk. If the coverage file does not appear
to match the format expected by this parser -- that is okay.
Should this parser crash and burn, the CoverageReader will simply move
on to the next available parser and discard this attempt.
This function should *only* parse & categorize the coverage data that
it loads from disk. If this function returns without error, the
CoverageReader will attempt to call one of the get() functions later
to retrieve the data you have loaded.
The best coverage file formats will contain some sort of mapping
for the coverage data that ties it to a module or binary that was in
the instrumented process space.
If this mapping is known, then this function should strive to store
the coverage data in the self.modules dictionary, where
self.modules[module_name] = [ coverage_addresses ]
"""
raise NotImplementedError("Coverage parser not implemented")
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
#
# if you are writing a parser for a custom coverage file format, your
# parser is *REQUIRED* to implement one of the following routines.
#
# the CoverageReader will attempt to retrieve parsed data from this class
# using one of the functions below.
#
def get_addresses(self, module_name=None):
"""
Return coverage data for the named module as absolute addresses.
If no name is given / available via self.modules, the trace is assumed
to be an ABSOLUTE ADDRESS TRACE.
These are arguably the least flexible kind of traces available, but are
still provided as an option. This function should return a list of
integers representing absolute coverage addresses that match the open
disassembler database...
coverage_addresses = [address, address1, address2, ...]
"""
raise NotImplementedError("Absolute addresses not supported by this log format")
def get_offsets(self, module_name):
"""
Return coverage data for the named module as relative offsets.
This function should return a list of integers representing the
relative offset of an executed instruction OR basic block from the
base of the requested module (module_name).
It is *okay* to return an instruction trace, OR a basic block trace
from this function. Lighthouse will automatically detect basic block
based traces and 'explode' them into instruction traces.
coverage_data = [offset, offset2, offset3, ...]
"""
raise NotImplementedError("Relative addresses not supported by this log format")
def get_offset_blocks(self, module_name):
"""
Return coverage data for the named module in block form.
This function should return a list of tuples representing the coverage
for the requested module (module_name). The tuples must be in the form
of (offset, size).
offset - a relative offset from the module_name base address
size - the size of the instruction, block, or sequence executed
eg, if a basic block of 24 bytes in length at kernel32.dll+0x4182 was
executed, its tuple would be (0x4182, 24).
The complete list of coverage data returned by this function should be in
the following form:
coverage_data = [(offset, size), (offset1, size1), ...]
"""
raise NotImplementedError("Block form not supported by this log format")
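To make the contract above concrete, here is a minimal CoverageFile-style parser for a hypothetical `module+0xOFFSET` line format. It is shown standalone (no Lighthouse imports) and the class name and format are illustrative only:

```python
class SimpleOffsetFile(object):
    """
    A minimal CoverageFile-style parser for a hypothetical
    'module+0xOFFSET' line format.
    """

    def __init__(self, lines):
        self.modules = {}
        self._parse(lines)

    def _parse(self, lines):
        # populate self.modules[module_name] = [coverage_offsets]
        for line in lines:
            module_name, offset = line.strip().rsplit("+", 1)
            self.modules.setdefault(module_name, []).append(int(offset, 16))

    def get_offsets(self, module_name):
        return self.modules.get(module_name, [])
```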
@@ -0,0 +1,140 @@
import os
import sys
import inspect
import logging
import traceback
from .coverage_file import CoverageFile
from lighthouse.util.python import iteritems
from lighthouse.exceptions import CoverageParsingError
logger = logging.getLogger("Lighthouse.Reader")
MODULES_DIRECTORY = os.path.join(os.path.dirname(os.path.realpath(__file__)), "parsers")
class CoverageReader(object):
"""
Middleware to automatically parse and load different coverage file formats.
This class will dynamically load and make use of coverage file parsers
that subclass from the CoverageFile abstraction and live within the
reader's 'parsers' folder.
This should allow end-users to write parsers for custom coverage file
formats without having to modify any of Lighthouse's existing code (ideally)
"""
def __init__(self):
self._installed_parsers = {}
self._import_parsers()
def open(self, filepath):
"""
Open and parse a coverage file from disk.
Returns a CoverageFile on success, or raises CoverageParsingError on failure.
"""
coverage_file = None
parse_failures = {}
# attempt to parse the given coverage file with each available parser
for name, parser in iteritems(self._installed_parsers):
logger.debug("Attempting parse with '%s'" % name)
# attempt to open/parse the coverage file with the given parser
try:
coverage_file = parser(filepath)
break
# log the exceptions for each parse failure
except Exception as e:
parse_failures[name] = traceback.format_exc()
logger.debug("| Parse FAILED - " + str(e))
#logger.exception("| Parse FAILED")
#
# if *all* the coverage file parsers failed, raise an exception with
# information for each failure (for debugging)
#
if not coverage_file:
raise CoverageParsingError(filepath, parse_failures)
# successful parse
logger.debug("| Parse OKAY")
return coverage_file
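The try-each-parser fallback pattern in `open()` can be shown in miniature with two hypothetical parsers (the names `parser_a`/`parser_b` and `open_with_fallback` are illustrative, not part of Lighthouse):

```python
def parser_a(data):
    raise ValueError("not my format")   # hypothetical failing parser

def parser_b(data):
    return ("parsed", data)             # hypothetical succeeding parser

def open_with_fallback(data, parsers):
    failures = {}
    for name, parser in parsers.items():
        try:
            return parser(data)         # first parser that works wins
        except Exception as e:
            failures[name] = str(e)     # remember why each one failed
    raise RuntimeError("no parser succeeded: %r" % failures)
```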
def _import_parsers(self):
"""
Scan and import coverage file parsers from the 'parsers' directory.
"""
target_subclass = CoverageFile
ignored_files = ["__init__.py"]
# loop through all the files in the parsers folder
for filename in os.listdir(MODULES_DIRECTORY):
# ignore specified files, and anything not *.py
if filename in ignored_files or not filename.endswith(".py"):
continue
# attempt to load a CoverageFile format from the current *.py file
logger.debug("| Searching file %s" % filename)
parser_file = filename[:-3]
parser_class = self._locate_subclass(parser_file, target_subclass)
if not parser_class:
logger.warning("| - No object subclassing from %s found in %s..." \
% (target_subclass.__name__, parser_file))
continue
# instantiate and add the parser to our dict of imported parsers
logger.debug("| | Found %s" % parser_class.__name__)
self._installed_parsers[parser_class.__name__] = parser_class
logger.debug("+- Done dynamically importing parsers")
# return the number of modules successfully imported
return self._installed_parsers
def _locate_subclass(self, module_file, target_subclass):
"""
Return the first matching target_subclass in module_file.
This function is used to scan a specific file (module_file) in the
Lighthouse parsers directory for class definitions that subclass from
target_subclass.
We use this to dynamically import, locate, and return objects that
utilize our CoverageFile abstraction.
"""
module = None
module_class = None
# attempt to import the given filepath as a python module
try:
module = __import__("lighthouse.reader.parsers." + module_file, globals(), locals(), ['object'])
except Exception as e:
logger.exception("| - Parser import failed")
return None
#
# inspect the module for any classes that subclass from target_subclass
# eg: target_subclass == CoverageFile
#
class_members = inspect.getmembers(module, inspect.isclass)
for a_class in class_members:
# does the current class definition we're inspecting subclass
# from target_subclass? if so, it is a match
try:
if a_class[1].__bases__[0] == target_subclass:
module_class = a_class[1]
break
# this class does not subclass / etc / not interesting / ignore it
except IndexError as e:
pass
# return discovered parser or None
return module_class
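The direct-base-class check used by `_locate_subclass()` can be demonstrated against a synthetic module (the `fake_parsers` module and class names below are stand-ins for illustration):

```python
import inspect
import types

# a stand-in for a dynamically imported parser file, with two classes in it
mod = types.ModuleType("fake_parsers")

class Base(object):
    pass

class Match(Base):
    pass

mod.Base, mod.Match = Base, Match

def find_subclass(module, target):
    # return the first class whose *direct* base class is the target
    for name, cls in inspect.getmembers(module, inspect.isclass):
        if cls.__bases__ and cls.__bases__[0] == target:
            return cls
    return None
```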
@@ -0,0 +1,533 @@
#!/usr/bin/python
import os
import re
import sys
import mmap
import struct
import collections
from ctypes import *
#
# I know people like to use this parser in their own projects, so this
# if/def makes it compatible with being imported or used outside Lighthouse
#
try:
from lighthouse.exceptions import CoverageMissingError
from lighthouse.reader.coverage_file import CoverageFile
g_lighthouse = True
except ImportError as e:
CoverageFile = object
g_lighthouse = False
#------------------------------------------------------------------------------
# DynamoRIO Drcov Log Parser
#------------------------------------------------------------------------------
class DrcovData(CoverageFile):
"""
A drcov log parser.
"""
def __init__(self, filepath=None):
self.filepath = filepath
# drcov header attributes
self.version = 0
self.flavor = None
# drcov module table
self.module_table_count = 0
self.module_table_version = 0
self.modules = {}
# drcov basic block data
self.bbs = []
self.bb_table_count = 0
self.bb_table_is_binary = True
# parse
if g_lighthouse:
super(DrcovData, self).__init__(filepath)
else:
self._parse()
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_offsets(self, module_name):
"""
Return coverage data as basic block offsets for the named module.
"""
modules = self.modules.get(module_name, [])
if not modules:
return []
#
# I don't know if this should ever actually trigger, but if it does,
# it is a strange testcase to collect coverage against. It means that
# maybe the target library/module was loaded, unloaded, and reloaded?
#
# if someone ever actually triggers this, we can look into it :S
#
if self.version > 2:
assert all(module.containing_id == modules[0].id for module in modules)
# extract the unique module ids that we need to collect blocks for
mod_ids = [module.id for module in modules]
# loop through the coverage data and filter out data for the target ids
coverage_blocks = [bb.start for bb in self.bbs if bb.mod_id in mod_ids]
# return the filtered coverage blocks
return coverage_blocks
def get_offset_blocks(self, module_name):
"""
Return coverage data as basic blocks (offset, size) for the named module.
"""
modules = self.modules.get(module_name, [])
if not modules:
return []
# NOTE: see comment in get_offsets() for more info...
if self.version > 2:
assert all(module.containing_id == modules[0].id for module in modules)
# extract the unique module ids that we need to collect blocks for
mod_ids = [module.id for module in modules]
# loop through the coverage data and filter out data for the target ids
if self.version < 3:
coverage_blocks = [(bb.start, bb.size) for bb in self.bbs if bb.mod_id in mod_ids]
#
# drcov version 3 does not include the 'preferred' / sub-module base
# in the bb offset, so we must add that base offset before returning
# the block offsets to correctly normalize things
#
# it's unclear if the preferred_base for given sub-module segments
# will always be correct, so we opt to simply use the first segment
# in a given module as the base to compute the known runtime offset
#
else:
mod_bases = dict([(module.id, module.start - modules[0].start) for module in modules])
coverage_blocks = [(mod_bases[bb.mod_id] + bb.start, bb.size) for bb in self.bbs if bb.mod_id in mod_ids]
# return the filtered coverage blocks
return coverage_blocks
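The version-3+ normalization above is just segment arithmetic: each block offset is relative to its containing segment, so we add the delta between that segment's start and the first segment's start. A sketch with hypothetical module ids and addresses:

```python
# two hypothetical segments of one module, keyed by drcov module id
segment_starts = {4: 0x10000, 5: 0x15000}
first_start = segment_starts[4]   # first segment acts as the module base

# per-segment delta used to re-base block offsets onto the first segment
mod_bases = {mod_id: start - first_start
             for mod_id, start in segment_starts.items()}

# a block at offset 0x30 inside segment id 5 normalizes to 0x5030
normalized = mod_bases[5] + 0x30
```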
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse drcov coverage from the given log file.
"""
with open(self.filepath, "rb") as f:
self._parse_drcov_header(f)
self._parse_module_table(f)
self._parse_bb_table(f)
#--------------------------------------------------------------------------
# Parsing Routines - Internals
#--------------------------------------------------------------------------
def _parse_drcov_header(self, f):
"""
Parse drcov log header from filestream.
"""
# parse drcov version from log
# eg: DRCOV VERSION: 2
version_line = f.readline().decode('utf-8').strip()
self.version = int(version_line.split(":")[1])
# parse drcov flavor from log
# eg: DRCOV FLAVOR: drcov
flavor_line = f.readline().decode('utf-8').strip()
self.flavor = flavor_line.split(":")[1]
assert self.version == 2 or self.version == 3, "Only drcov versions 2 and 3 log files supported"
def _parse_module_table(self, f):
"""
Parse drcov log module table from filestream.
"""
self._parse_module_table_header(f)
self._parse_module_table_columns(f)
self._parse_module_table_modules(f)
def _parse_module_table_header(self, f):
"""
Parse drcov log module table header from filestream.
-------------------------------------------------------------------
Format used in DynamoRIO v6.1.1 through 6.2.0
eg: 'Module Table: 11'
Format used in DynamoRIO v7.0.0-RC1 (and hopefully above)
eg: 'Module Table: version X, count 11'
"""
# parse module table 'header'
# eg: Module Table: version 2, count 11
header_line = f.readline().decode('utf-8').strip()
field_name, field_data = header_line.split(": ")
#assert field_name == "Module Table"
#
# NOTE/COMPAT:
#
# DynamoRIO doesn't document their drcov log format, and it has
# changed its format at least once during its lifetime.
#
# we just have to try parsing the table header one way to determine
# if its the old (say, a 'v1') table, or the new 'v2' table.
#
try:
# separate 'version X' and 'count Y' from each other ('v2')
version_data, count_data = field_data.split(", ")
# failure to unpack indicates this is an 'older, v1' drcov log
except ValueError:
self.module_table_count = int(field_data)
self.module_table_version = 1
return
# parse module table version out of 'version X'
data_name, version = version_data.split(" ")
#assert data_name == "version"
self.module_table_version = int(version)
if self.module_table_version not in [2, 3, 4, 5]:
raise ValueError("Unsupported (new?) drcov log format...")
# parse module count in table from 'count Y'
data_name, count = count_data.split(" ")
#assert data_name == "count"
self.module_table_count = int(count)
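The v1-versus-v2+ disambiguation above hinges on the tuple-unpack `ValueError`. Extracted into a tiny standalone function (the function name is illustrative):

```python
def parse_table_header(field_data):
    # try the newer 'version X, count Y' form first...
    try:
        version_data, count_data = field_data.split(", ")
    except ValueError:
        # ...falling back to the bare-count form, eg 'Module Table: 11'
        return 1, int(field_data)
    version = int(version_data.split(" ")[1])
    count = int(count_data.split(" ")[1])
    return version, count
```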
def _parse_module_table_columns(self, f):
"""
Parse drcov log module table columns from filestream.
-------------------------------------------------------------------
DynamoRIO v6.1.1, table version 1:
eg: (Not present)
DynamoRIO v7.0.0-RC1, table version 2:
Windows:
'Columns: id, base, end, entry, checksum, timestamp, path'
Mac/Linux:
'Columns: id, base, end, entry, path'
DynamoRIO v7.0.17594B, table version 3:
Windows:
'Columns: id, containing_id, start, end, entry, checksum, timestamp, path'
Mac/Linux:
'Columns: id, containing_id, start, end, entry, path'
DynamoRIO v7.0.17640, table version 4:
Windows:
'Columns: id, containing_id, start, end, entry, offset, checksum, timestamp, path'
Mac/Linux:
'Columns: id, containing_id, start, end, entry, offset, path'
DynamoRIO v10.0.19734, table version 5:
Windows:
'Columns: id, containing_id, start, end, entry, offset, preferred_base, checksum, timestamp, path'
Mac/Linux:
'Columns: id, containing_id, start, end, entry, offset, preferred_base, path'
"""
# NOTE/COMPAT: there is no 'Columns' line for the v1 table...
if self.module_table_version == 1:
return
# parse module table 'columns'
# eg: Columns: id, base, end, entry, checksum, timestamp, path
column_line = f.readline().decode('utf-8').strip()
field_name, field_data = column_line.split(": ")
#assert field_name == "Columns"
# separate column names
# Windows: id, base, end, entry, checksum, timestamp, path
# Mac/Linux: id, base, end, entry, path
columns = field_data.split(", ")
def _parse_module_table_modules(self, f):
"""
Parse drcov log modules in the module table from filestream.
"""
modules = collections.defaultdict(list)
# loop through each *expected* line in the module table and parse it
for i in range(self.module_table_count):
module = DrcovModule(f.readline().decode('utf-8').strip(), self.module_table_version)
modules[module.filename].append(module)
self.modules = modules
def _parse_bb_table(self, f):
"""
Parse drcov log basic block table from filestream.
"""
self._parse_bb_table_header(f)
self._parse_bb_table_entries(f)
def _parse_bb_table_header(self, f):
"""
Parse drcov log basic block table header from filestream.
"""
# parse basic block table 'header'
# eg: BB Table: 2792 bbs
header_line = f.readline().decode('utf-8').strip()
field_name, field_data = header_line.split(": ")
#assert field_name == "BB Table"
# parse basic block count out of 'X bbs'
count_data, data_name = field_data.split(" ")
#assert data_name == "bbs"
self.bb_table_count = int(count_data)
# peek at the next few bytes to determine if this is a binary bb table.
# An ascii bb table will have the line: 'module id, start, size:'
token = b"module id"
saved_position = f.tell()
# is this an ascii table?
if f.read(len(token)) == token:
self.bb_table_is_binary = False
# nope! binary table
else:
self.bb_table_is_binary = True
# seek back to the start of the table
f.seek(saved_position)
def _parse_bb_table_entries(self, f):
"""
Parse drcov log basic block table entries from filestream.
"""
# allocate the ctypes structure array of basic blocks
self.bbs = (DrcovBasicBlock * self.bb_table_count)()
# read binary basic block entries directly into the newly allocated array
if self.bb_table_is_binary:
f.readinto(self.bbs)
# parse the plaintext basic block entries one by one
else:
self._parse_bb_table_text_entries(f)
def _parse_bb_table_text_entries(self, f):
"""
Parse drcov log basic block table text entries from filestream.
"""
table_header = f.readline().decode('utf-8').strip()
if table_header != "module id, start, size:":
raise ValueError("Invalid BB header: %r" % table_header)
pattern = re.compile(r"^module\[\s*(?P<mod>[0-9]+)\]\:\s*(?P<start>0x[0-9a-fA-F]+)\,\s*(?P<size>[0-9]+)$")
for i, bb in enumerate(self.bbs):
text_entry = f.readline().decode('utf-8').strip()
if not text_entry:
continue
match = pattern.match(text_entry)
if not match:
raise ValueError("Invalid BB entry: %r" % text_entry)
bb.start = int(match.group("start"), 16)
bb.size = int(match.group("size"), 10)
bb.mod_id = int(match.group("mod"), 10)
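The text-entry regex above can be exercised against a single hypothetical ascii table line to show what it captures:

```python
import re

pattern = re.compile(r"^module\[\s*(?P<mod>[0-9]+)\]\:\s*(?P<start>0x[0-9a-fA-F]+)\,\s*(?P<size>[0-9]+)$")

# a hypothetical ascii bb table entry (not from a real log)
match = pattern.match("module[  3]: 0x00001a2b, 24")
start = int(match.group("start"), 16)
size = int(match.group("size"), 10)
mod_id = int(match.group("mod"), 10)
```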
#------------------------------------------------------------------------------
# drcov module parser
#------------------------------------------------------------------------------
class DrcovModule(object):
"""
Parser & wrapper for module details as found in a drcov coverage log.
A 'module' in this context is a .EXE, .DLL, ELF, MachO, etc.
"""
def __init__(self, module_data, version):
self.id = 0
self.base = 0
self.end = 0
self.size = 0
self.entry = 0
self.checksum = 0
self.timestamp = 0
self.path = ""
self.filename = ""
self.containing_id = 0
# parse the module
self._parse_module(module_data, version)
@property
def start(self):
"""
Compatibility alias for the module base.
DrCov table version 2 --> 3 changed this parameter name base --> start.
"""
return self.base
def _parse_module(self, module_line, version):
"""
Parse a module table entry.
"""
data = module_line.split(", ")
# NOTE/COMPAT
if version == 1:
self._parse_module_v1(data)
elif version == 2:
self._parse_module_v2(data)
elif version == 3:
self._parse_module_v3(data)
elif version == 4:
self._parse_module_v4(data)
elif version == 5:
self._parse_module_v5(data)
else:
raise ValueError("Unknown module format (v%u)" % version)
def _parse_module_v1(self, data):
"""
Parse a module table v1 entry.
"""
self.id = int(data[0])
self.size = int(data[1])
self.path = str(data[2])
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v2(self, data):
"""
Parse a module table v2 entry.
"""
self.id = int(data[0])
self.base = int(data[1], 16)
self.end = int(data[2], 16)
self.entry = int(data[3], 16)
if len(data) == 7: # Windows Only
self.checksum = int(data[4], 16)
self.timestamp = int(data[5], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v3(self, data):
"""
Parse a module table v3 entry.
"""
self.id = int(data[0])
self.containing_id = int(data[1])
self.base = int(data[2], 16)
self.end = int(data[3], 16)
self.entry = int(data[4], 16)
if len(data) > 7: # Windows Only
self.checksum = int(data[5], 16)
self.timestamp = int(data[6], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v4(self, data):
"""
Parse a module table v4 entry.
"""
self.id = int(data[0])
self.containing_id = int(data[1])
self.base = int(data[2], 16)
self.end = int(data[3], 16)
self.entry = int(data[4], 16)
self.offset = int(data[5], 16)
if len(data) > 8: # Windows Only
self.checksum = int(data[6], 16)
self.timestamp = int(data[7], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v5(self, data):
"""
Parse a module table v5 entry.
"""
self.id = int(data[0])
self.containing_id = int(data[1])
self.base = int(data[2], 16)
self.end = int(data[3], 16)
self.entry = int(data[4], 16)
self.offset = int(data[5], 16)
self.preferred_base = int(data[6], 16)
if len(data) > 8: # Windows Only
self.checksum = int(data[7], 16)
self.timestamp = int(data[8], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
#------------------------------------------------------------------------------
# drcov basic block parser
#------------------------------------------------------------------------------
class DrcovBasicBlock(Structure):
"""
Parser & wrapper for basic block details as found in a drcov coverage log.
NOTE:
Based off the C structure as used by drcov -
/* Data structure for the coverage info itself */
typedef struct _bb_entry_t {
uint start; /* offset of bb start from the image base */
ushort size;
ushort mod_id;
} bb_entry_t;
"""
_pack_ = 1
_fields_ = [
('start', c_uint32),
('size', c_uint16),
('mod_id', c_uint16)
]
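Each binary `bb_entry_t` is 8 packed bytes (u32 start, u16 size, u16 mod_id). A `struct`-based sketch of round-tripping one entry, assuming little-endian byte order as on typical x86 targets:

```python
import struct

# pack one bb_entry_t by hand: start=0x4182, size=24, mod_id=2
# ('<IHH' mirrors the ctypes fields above, little-endian)
raw = struct.pack("<IHH", 0x4182, 24, 2)
start, size, mod_id = struct.unpack("<IHH", raw)
```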
#------------------------------------------------------------------------------
# Command Line Testing
#------------------------------------------------------------------------------
if __name__ == "__main__":
argc = len(sys.argv)
argv = sys.argv
# base usage
if argc < 2:
print("usage: {} <coverage filename>".format(os.path.basename(sys.argv[0])))
sys.exit()
# attempt file parse
x = DrcovData(argv[1])
for bb in x.bbs:
print("0x{:08x}".format(bb.start))
@@ -0,0 +1,42 @@
import os
import collections
from ..coverage_file import CoverageFile
class ModOffData(CoverageFile):
"""
A module+offset log parser.
"""
def __init__(self, filepath):
super(ModOffData, self).__init__(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_offsets(self, module_name):
return self.modules.get(module_name, {}).keys()
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse modoff coverage from the given log file.
"""
modules = collections.defaultdict(lambda: collections.defaultdict(int))
with open(self.filepath) as f:
for line in f:
trimmed = line.strip()
# skip empty lines
if not trimmed: continue
# comments can start with ';' or '#'
if trimmed[0] in [';', '#']: continue
module_name, bb_offset = line.rsplit("+", 1)
modules[module_name][int(bb_offset, 16)] += 1
self.modules = modules
@@ -0,0 +1,82 @@
import collections
from ..coverage_file import CoverageFile
# 'known' instruction pointer labels from Tenet traces
INSTRUCTION_POINTERS = ['EIP', 'RIP', 'PC']
class TenetData(CoverageFile):
"""
A Tenet trace log parser.
"""
def __init__(self, filepath):
self._hitmap = {}
super(TenetData, self).__init__(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_addresses(self, module_name=None):
return self._hitmap.keys()
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse absolute instruction addresses from the given Tenet trace.
"""
hitmap = collections.defaultdict(int)
with open(self.filepath) as f:
while True:
# read 128mb chunks of 'lines' from the file
lines = f.readlines(1024 * 1024 * 128)
# no more lines to process, break
if not lines:
break
# parse the instruction addresses from lines, into the hitmap
self._process_lines(lines, hitmap)
# save the hitmap if we completed parsing without crashing
self._hitmap = hitmap
def _process_lines(self, lines, hitmap):
"""
Parse instruction addresses out of the given text lines.
"""
for line in lines:
# split the line (an execution delta) into its individual entries
delta = line.split(",")
# process each item (a name=value pair) in the execution delta
for item in delta:
# split name/value pair, and normalize the name for matching
name, value = item.split("=")
name = name.upper()
# ignore entries that are not the instruction pointer
if name not in INSTRUCTION_POINTERS:
continue
# save the parsed instruction pointer address to the hitmap
address = int(value, 16)
hitmap[address] += 1
# break because we don't expect two IPs on the same line
break
# continue to the next line
# ...
# done parsing this chunk of lines
return
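The per-line delta parsing in `_process_lines()` reduces to this, shown against one hypothetical Tenet delta line (the register values are made up):

```python
hitmap = {}

# one hypothetical Tenet execution delta; only the instruction pointer counts
line = "rax=0x2,rip=0x140001000"
for item in line.split(","):
    name, value = item.split("=")
    if name.upper() not in ("EIP", "RIP", "PC"):
        continue
    address = int(value, 16)
    hitmap[address] = hitmap.get(address, 0) + 1
    break   # don't expect two IPs on one line
```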
@@ -0,0 +1,32 @@
import collections
from ..coverage_file import CoverageFile
class TraceData(CoverageFile):
"""
An instruction (or basic block) address trace log parser.
"""
def __init__(self, filepath):
self._hitmap = {}
super(TraceData, self).__init__(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_addresses(self, module_name=None):
return self._hitmap.keys()
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse absolute address coverage from the given log file.
"""
hitmap = collections.defaultdict(int)
with open(self.filepath) as f:
for line in f:
hitmap[int(line, 16)] += 1
self._hitmap = hitmap
@@ -0,0 +1,4 @@
from .palette import LighthousePalette
from .coverage_xref import CoverageXref
from .module_selector import ModuleSelector
from .coverage_overview import CoverageOverview
@@ -0,0 +1,846 @@
import logging
import weakref
from lighthouse.util import *
from lighthouse.util.qt import *
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.UI.ComboBox")
#------------------------------------------------------------------------------
# Constants Definitions
#------------------------------------------------------------------------------
SEPARATOR = "seperator"
SEPARATOR_HEIGHT = 1 # pixels
ENTRY_USER = "USER"
ENTRY_SPECIAL = "SPECIAL"
COLUMN_COVERAGE_STRING = 0
COLUMN_DELETE = 1
#------------------------------------------------------------------------------
# Coverage ComboBox
#------------------------------------------------------------------------------
class CoverageComboBox(QtWidgets.QComboBox):
"""
The Coverage ComboBox UI for switching between loaded coverage.
I had to write an unnecessary amount of code to prototype the engaging
combobox experiences I was looking for.
But now that we have all the important combobox components subclassed
out (it was necessary, I promise), perhaps there are a few more
interesting and fun features we can add in the future.
"""
def __init__(self, director, parent=None):
super(CoverageComboBox, self).__init__(parent)
self.setObjectName(self.__class__.__name__)
self._director = director
# configure the widget for use
self._ui_init()
self.refresh_theme()
#--------------------------------------------------------------------------
# QComboBox Overloads
#--------------------------------------------------------------------------
def mouseReleaseEvent(self, e):
"""
Capture mouse release events on the QComboBox.
"""
# get the widget currently beneath the mouse event being handled
hovering = self.childAt(e.pos())
#
# if the hovered widget is the 'head' of the QComboBox, we assume
# the user has clicked it and should show the dropdown 'popup'
#
# we must showPopup() ourselves because internal Qt logic for
# 'editable' comboboxes tries to enter an editing mode for the field
# rather than expanding the dropdown.
#
# if you don't remember, our combobox is marked 'editable' to satisfy
# some internal Qt logic so that our 'Windows' draw style is used
#
if hovering == self.lineEdit():
self.showPopup()
e.accept()
return
# handle any other events as they normally should be
super(CoverageComboBox, self).mouseReleaseEvent(e)
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
self.setFont(self._font)
# create the underlying model & table to power the combobox dropdown
self.setModel(CoverageComboBoxModel(self._director, self))
self.setView(CoverageComboBoxView(self.model(), self))
#
# in the interest of maintaining a more consistent cross-platform
# style for the coverage combobox and its dropdown, we use an
# 'editable' QComboBox with the 'Windows' Qt style.
#
# since we don't actually want the QCombobox to be editable, we
# do everything we can to make it readonly / non-interfaceable.
#
self.setEditable(True)
self.lineEdit().setFont(self._font)
self.lineEdit().setReadOnly(True) # text can't be edited
self.lineEdit().setEnabled(False) # text can't be selected
#
# the combobox will pick a size based on its contents when it is first
# made visible, but we also make it arbitrarily resizable for the
# user to change and play with at their own leisure
#
self.setSizeAdjustPolicy(QtWidgets.QComboBox.AdjustToContentsOnFirstShow)
self.setSizePolicy(QtWidgets.QSizePolicy.Ignored, QtWidgets.QSizePolicy.Ignored)
self.setMaximumHeight(int(self._font_metrics.height()*1.75))
# draw the QComboBox with a 'Windows'-esque style
self.setStyle(QtWidgets.QStyleFactory.create("Windows"))
# connect relevant signals
self._ui_init_signals()
def _ui_init_signals(self):
"""
Connect UI signals.
"""
# combobox selection was changed
self.activated.connect(self._ui_selection_changed)
# the 'X' / delete icon was clicked on a dropdown entry
self.view().clicked.connect(self._ui_clicked_delete)
# register for cues from the director
self._director.coverage_switched(self._internal_refresh)
self._director.coverage_modified(self._internal_refresh)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_clicked_delete(self, index):
"""
Handle a click on the 'X' delete icon (cell) on a dropdown entry.
"""
if not index.isValid():
return
#
# the dropdown popup is actually a 2D table. column 0 is the detailed
# coverage string, while column 1 is the delete 'X' icon.
#
# this is a sanity check to ensure that the clicked index is actually
# the deletion column. It should not be possible for column 0 (the
# detail string) to pass through here, as that will be captured by
# the default combobox signal handlers.
#
# the reason the deletion column clicks can pass through is because
# the model has technically marked their cells as 'un-selectable'
# through the flags() overload.
#
assert index.column() == COLUMN_DELETE, "Unexpected Column (%u)" % index.column()
#
# using the table cell index that was clicked, we want to lookup the
# coverage name that this 'X' icon/cell is associated with.
#
# we retrieve the associated coverage name from the 'UserRole' field
# of the model using the clicked index. The 'UserRole' is a Qt field
# we are free to store developer/misc data in
#
coverage_name = self.model().data(index, QtCore.Qt.UserRole)
assert coverage_name
# pass the deletion request onto the director to delete said coverage
self._director.delete_coverage(coverage_name)
# refresh the dropdown (it will remove the deleted entry from the UI)
self.showPopup()
#
# I don't want there to be any entries highlighted after a deletion
# event (it looks weird), so clear the table/dropdown highlights now
#
self.view().selectionModel().setCurrentIndex(
QtCore.QModelIndex(),
QtCore.QItemSelectionModel.ClearAndSelect
)
#
# the deletion of an entry will shift all the entries beneath it up
# by one. in this case, it is important we refresh the selection index
# to reflect the director so that it stays correct.
#
self._refresh_selection()
def _ui_selection_changed(self, row):
"""
Handle selection change of coverage combobox.
"""
# convert the combobox row index into a QModelIndex
index = self.model().index(row, 0)
# using the true index, lookup the coverage name for this selection
coverage_name = self.model().data(index, QtCore.Qt.UserRole)
# pass the user selection onto the director to change loaded coverage
self._director.select_coverage(coverage_name)
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
def refresh(self):
"""
Public refresh of the coverage combobox.
"""
self._internal_refresh()
@disassembler.execute_ui
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
palette = self._director.palette
self.view().refresh_theme()
# configure the combobox's top row / visible dropdown
self.lineEdit().setStyleSheet(
"QLineEdit { "
" border: none;"
" padding: 0 0 0 2ex;"
" margin: 0;"
" background-color: %s;" % palette.combobox_background.name() +
"}"
)
# style the combobox dropdown
self.setStyleSheet(
"QComboBox {"
" color: %s;" % palette.combobox_text.name() +
" border: 1px solid %s;" % palette.combobox_border.name() +
" padding: 0;"
"} "
"QComboBox:hover, QComboBox:focus {"
" border: 1px solid %s;" % palette.combobox_border_focus.name() +
"}"
)
@disassembler.execute_ui
def _internal_refresh(self):
"""
Internal refresh of the coverage combobox.
"""
# refresh the combobox internals
self.model().refresh()
self.view().refresh()
#
# now that the combobox is fully up to date, select the item index
# that matches the active coverage as per the director
#
self._refresh_selection()
def _refresh_selection(self):
"""
Refresh the coverage combobox selection.
"""
# NOTE: we block any index change signals to stop unnecessary churn
self.blockSignals(True)
new_index = self.findData(self._director.coverage_name)
self.setCurrentIndex(new_index)
self.lineEdit().home(False)
self.blockSignals(False)
#------------------------------------------------------------------------------
# Coverage ComboBox - TableView
#------------------------------------------------------------------------------
class CoverageComboBoxView(QtWidgets.QTableView):
"""
The (internal) table view used for the Coverage ComboBox dropdown.
"""
def __init__(self, model, parent=None):
super(CoverageComboBoxView, self).__init__(parent)
self.setObjectName(self.__class__.__name__)
self._combobox = weakref.proxy(parent)
self._timer = None
# install the given data model into the table view
self.setModel(model)
# initialize UI elements
self._ui_init()
self.refresh_theme()
#--------------------------------------------------------------------------
# QTableView Overloads
#--------------------------------------------------------------------------
def showEvent(self, e):
"""
Show the QComboBox dropdown/popup.
"""
#
# the next line of code will prevent the combobox 'head' from getting
# any mouse actions now that the popup/dropdown is visible.
#
# this is pretty aggressive, but it will allow the user to 'collapse'
# the combobox dropdown while it is in an expanded state by simply
# clicking the combobox head as one can do to expand it.
#
# the reason this dirty trick is able to simulate a 'collapsing click'
# is because the user clicks 'outside' the popup/dropdown which
# automatically closes it. if the click was on the combobox head, it
# is simply ignored because we set this attribute!
#
# when the popup is closing, we undo this action in hideEvent().
#
# we have to use this workaround because we are using an 'editable' Qt
# combobox which behaves differently to clicks than a normal combobox.
#
# NOTE: we have to do this here in the tableview because the combobox's
# showPopup() and hidePopup() do not always trigger symmetrically.
#
# for example, hidePopup() was not being triggered when focus was lost
# via virtual desktop switch, and other external focus changes. this
# is really bad, because the combobox would get stuck *closed* as it
# was never re-enabled for mouse events
#
self._combobox.setAttribute(QtCore.Qt.WA_TransparentForMouseEvents)
def hideEvent(self, e):
"""
Hide the QComboBox dropdown/popup.
"""
#
# the combobox popup is now hidden / collapsed. the combobox head needs
# to be re-enabled so it can receive mouse clicks (eg, to expand it).
# this undoes the setAttribute() call in showEvent() above.
#
# if the coverage combobox is *not* visible, the coverage window is
# probably being closed / deleted. but just in case, we should attempt
# to restore the combobox's ability to accept clicks before bailing.
#
# this fixes a bug / Qt warning first printed in IDA 7.4 where 'self'
# (the combobox) would be deleted by the time the 100ms timer in the
# 'normal' case fires below
#
if not self._combobox.isVisible():
self._combobox.setAttribute(QtCore.Qt.WA_TransparentForMouseEvents, False)
return
#
# in the more normal case, the combobox is simply being collapsed
# by the user clicking it, or clicking away from it.
#
# we use a short timer of 100ms to ensure the 'hiding' of the dropdown
# and its associated click are processed first. afterwards, it is safe to
# begin accepting clicks again.
#
self._timer = QtCore.QTimer.singleShot(100, self.__hidePopup_setattr)
def __hidePopup_setattr(self):
self._combobox.setAttribute(QtCore.Qt.WA_TransparentForMouseEvents, False)
def leaveEvent(self, e):
"""
Overload the mouse leave event.
"""
#
# this code mitigates a bug (feature?) where the last hovered index
# of the table view was retaining its MouseOver flag internally. This
# was keeping my 'X' icons highlighted if the mouse cursor left the
# table while touching one of these cells last.
#
# we basically send a fake 'Hover Event' to the table viewport at an
# invalid position so the table clears any remaining hover flags.
#
event = QtGui.QHoverEvent(QtCore.QEvent.HoverLeave, QtCore.QPoint(-1,-1), QtCore.QPoint(-1,-1))
QtWidgets.QApplication.sendEvent(self.viewport(), event)
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
self.setFont(self._font)
# hide dropdown table headers, and default grid
self.horizontalHeader().setVisible(False)
self.verticalHeader().setVisible(False)
self.setShowGrid(False)
# let Qt automatically elide (...) long row text (coverage names)
self.resizeColumnToContents(0)
self.setTextElideMode(QtCore.Qt.ElideRight)
self.setWordWrap(False)
# more code-friendly, readable aliases
vh = self.verticalHeader()
hh = self.horizontalHeader()
#
# - set the coverage name column to be stretchy and as tall as the text
# - make the 'X' icon column fixed width
#
hh.setSectionResizeMode(0, QtWidgets.QHeaderView.Stretch)
vh.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)
vh.setMinimumSectionSize(0)
# get the column width hint from the model for the 'X' delete column
icon_column_width = self.model().headerData(
COLUMN_DELETE,
QtCore.Qt.Horizontal,
QtCore.Qt.SizeHintRole
)
# set the 'X' delete icon column width to a fixed size based on the hint
hh.setMinimumSectionSize(icon_column_width)
hh.resizeSection(COLUMN_DELETE, icon_column_width)
hh.setSectionResizeMode(1, QtWidgets.QHeaderView.ResizeToContents)
# install a delegate to do some custom painting against the combobox
self.setItemDelegate(ComboBoxDelegate(self))
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
def refresh(self):
"""
Refresh the coverage combobox list order.
"""
model = self.model() # alias for readability
# merge the 'special' entries up until a separator is found
for row in range(model.rowCount()):
#
# if this row is not a user defined entry, we want to merge ('span')
# its cells so there is no 'X' delete button column shown for it.
#
# this should apply to special rows such as the 'Hot Shell',
# 'Aggregate', or the 'separator' indexes
#
if not model.data(model.index(row, 1), QtCore.Qt.DecorationRole):
self.setSpan(row, 0, 1, model.columnCount())
# this is a user entry, ensure there is no span present (clear it)
else:
self.setSpan(row, 0, 0, model.columnCount())
@disassembler.execute_ui
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
palette = self.model()._director.palette
self.setStyleSheet(
"QTableView {"
" background-color: %s;" % palette.combobox_background.name() +
" color: %s;" % palette.combobox_text.name() +
" margin: 0; outline: none;"
" border: 1px solid %s; " % palette.shell_border.name() +
"} "
"QTableView::item { " +
" padding: 0.5ex; border: 0; "
"} "
"QTableView::item:focus { " +
" background-color: %s; " % palette.combobox_selection_background.name() +
" color: %s; " % palette.combobox_selection_text.name() +
"} "
)
#------------------------------------------------------------------------------
# Coverage ComboBox - TableModel
#------------------------------------------------------------------------------
class CoverageComboBoxModel(QtCore.QAbstractTableModel):
"""
The (internal) table model used for the Coverage ComboBox dropdown.
"""
def __init__(self, director, parent=None):
super(CoverageComboBoxModel, self).__init__(parent)
self.setObjectName(self.__class__.__name__)
self._director = director
# our internal model
self._entries = []
self._seperator_index = 0
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# load the raw 'X' delete icon from disk
delete_icon = QtGui.QPixmap(plugin_resource("icons/delete_coverage.png"))
# compute the appropriate size for the deletion icon
icon_height = int(self._font_metrics.height()*0.75)
icon_width = icon_height
# scale the icon as appropriate (very likely scaling it down)
self._delete_icon = delete_icon.scaled(
icon_width,
icon_height,
QtCore.Qt.KeepAspectRatio,
QtCore.Qt.SmoothTransformation
)
# register for cues from the director
self._director.coverage_created(self.refresh)
self._director.coverage_deleted(self.refresh)
#--------------------------------------------------------------------------
# QAbstractTableModel Overloads
#--------------------------------------------------------------------------
def rowCount(self, parent=QtCore.QModelIndex()):
"""
The number of dropdown rows.
"""
return len(self._entries)
def columnCount(self, parent=QtCore.QModelIndex()):
"""
The number of dropdown columns.
| column[0] | column[1]
+---------------------------+--------------------
| detailed coverage string1 | 'X' (delete icon)
| detailed coverage string2 | 'X' (delete icon)
...
"""
return 2
def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
"""
Define the properties of the table rows & columns.
"""
# table row property request
if orientation == QtCore.Qt.Vertical:
# row height size hint request
if role == QtCore.Qt.SizeHintRole:
# the separator 'row' has a special, 'thinner' row size
if section == self._seperator_index:
return SEPARATOR_HEIGHT
# all other rows should be at least as tall as their text
else:
return self._font_metrics.height()
# table column property request
elif orientation == QtCore.Qt.Horizontal:
# column width size hint request
if role == QtCore.Qt.SizeHintRole:
#
# the column holding the 'X' delete icon should be small
# and fixed width, therefore we are explicit in specifying
# our own size hint for it.
#
# note that the icon size is used to hint the column width,
# but multiplied by two. this is because we want the 'X'
# icon to float and have some padding in its column.
#
if section == COLUMN_DELETE:
return self._delete_icon.size().width() * 2
# unhandled request, nothing to do
return None
def data(self, index, role=QtCore.Qt.DisplayRole):
"""
Define how Qt should access the underlying model data.
"""
# sanity check the given index
if not index.isValid() or \
not (index.row() < self.rowCount()) or \
not (index.column() < self.columnCount()):
return None
# font format request
if role == QtCore.Qt.FontRole:
return self._font
# text alignment request
elif role == QtCore.Qt.TextAlignmentRole:
return QtCore.Qt.AlignVCenter | QtCore.Qt.AlignLeft
# combobox header, padded with " " to account for dropdown arrow overlap
elif role == QtCore.Qt.EditRole:
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
return self._director.get_coverage_string(self._entries[index.row()]) + " "
# data display request
elif role == QtCore.Qt.DisplayRole:
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
return self._director.get_coverage_string(self._entries[index.row()])
# tooltip
elif role == QtCore.Qt.ToolTipRole:
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
coverage = self._director.get_coverage(self._entries[index.row()])
return coverage.filepath if coverage.filepath else ""
elif index.column() == COLUMN_DELETE:
return "Delete loaded coverage"
# icon display request
elif role == QtCore.Qt.DecorationRole:
# the icon request is for the 'X' column
if index.column() == COLUMN_DELETE:
#
# if the coverage entry is below the separator, it is a user
# loaded coverage and should always be deletable
#
if index.row() > self._seperator_index:
return self._delete_icon
#
# as a special case, we allow the aggregate to have a clear
# icon, which will clear all user loaded coverages
#
elif self._entries[index.row()] == "Aggregate":
return self._delete_icon
# entry type request
elif role == QtCore.Qt.AccessibleDescriptionRole:
#
# if the entry is ABOVE the separator index, it's a 'special'
# entry, eg 'Hot Shell', 'New Composition', 'Aggregate'
#
if index.row() < self._seperator_index:
return ENTRY_SPECIAL
#
# the entry IS the separator index
#
elif index.row() == self._seperator_index:
return SEPARATOR
#
# if the entry is BELOW the separator index, it's a 'user'
# entry, eg loaded coverage files, compositions, etc
#
else:
return ENTRY_USER
# entry coverage_name request
elif role == QtCore.Qt.UserRole:
return self._entries[index.row()]
# unhandled request, nothing to do
return None
def flags(self, index):
"""
Item flags for the given entry index.
"""
# the 'X' column is ENABLED, but not technically selectable
if index.column() == COLUMN_DELETE:
return QtCore.Qt.ItemIsEnabled
# the separator should not be interactive in *any* way
if index.row() == self._seperator_index:
return QtCore.Qt.NoItemFlags
# unhandled request, pass through
return super(CoverageComboBoxModel, self).flags(index)
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
def refresh(self):
"""
Refresh the coverage combobox model data.
"""
# extract all the names from the director with a shorthand symbol
with_shorthand = []
for name in self._director.coverage_names:
if self._director.get_shorthand(name):
with_shorthand.append(name)
# re-populate the model entries
self._entries = []
self._entries += list(self._director.special_names)
self._entries += [SEPARATOR]
self._entries += with_shorthand
# save the index of the separator for easy reference
self._seperator_index = self._entries.index(SEPARATOR)
# notify any listeners that the model layout may have changed
self.layoutChanged.emit()
#------------------------------------------------------------------------------
# Coverage ComboBox - Painting Delegate
#------------------------------------------------------------------------------
class ComboBoxDelegate(QtWidgets.QStyledItemDelegate):
"""
Coverage ComboBox Painting Delegate
Painting delegates can be used to augment the painting of a given
widget or its items. In this case, we use it to customize the
dropdown table in the Coverage ComboBox a bit more to our liking.
"""
def __init__(self, parent):
super(ComboBoxDelegate, self).__init__(parent)
# painting property definitions
self._grid_color = parent.model()._director.palette.shell_border
def sizeHint(self, option, index):
"""
Augmented entry sizeHint.
"""
if index.data(QtCore.Qt.AccessibleDescriptionRole) == SEPARATOR:
return QtCore.QSize(1, SEPARATOR_HEIGHT)
return super(ComboBoxDelegate, self).sizeHint(option, index)
def paint(self, painter, option, index):
"""
Augmented entry painting.
"""
# custom paint the 'grid line' beneath each coverage entry
if index.data(QtCore.Qt.AccessibleDescriptionRole) == ENTRY_USER:
painter.save()
painter.setPen(self._grid_color)
final_entry = (index.sibling(index.row()+1, 0).row() == -1)
# draw the grid line beneath the current row (a coverage entry)
tweak = QtCore.QPoint(0, 1) # 1px tweak provides better spacing
if not final_entry:
painter.drawLine(
option.rect.bottomLeft() + tweak,
option.rect.bottomRight() + tweak
)
#
# now we will re-draw the grid line *above* the current entry,
# fixing a minor graphical bug where grid lines could disappear
# after hovering over a row / entry
#
painter.drawLine(
option.rect.topLeft(),
option.rect.topRight()
)
painter.restore()
# custom paint the 'X' icon where applicable
if index.data(QtCore.Qt.DecorationRole):
# get the icon data from the model
pixmap = index.data(QtCore.Qt.DecorationRole)
# center the draw rect in the middle of the 'X' column cell
destination_rect = pixmap.rect()
destination_rect.moveCenter(option.rect.center())
# augment the icon pixmap to be grayed out (disabled) or colored
# based on the mouse hover status of this index
if not (option.state & QtWidgets.QStyle.State_MouseOver):
pixmap = QtWidgets.QApplication.style().generatedIconPixmap(
QtGui.QIcon.Disabled,
pixmap,
QtWidgets.QStyleOption()
)
# draw the icon to the column
painter.drawPixmap(destination_rect, pixmap)
return
# custom paint the separator entry between special & normal coverage
if index.data(QtCore.Qt.AccessibleDescriptionRole) == SEPARATOR:
painter.save()
painter.setPen(self._grid_color)
painter.drawRect(
option.rect
)
painter.restore()
# nothing else to paint for the separator entry
return
# pass through to the standard painting
super(ComboBoxDelegate, self).paint(painter, option, index)
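For readers skimming the diff, the dropdown ordering that CoverageComboBoxModel.refresh() builds can be sketched without any Qt dependencies. This is a simplified, hypothetical `build_entries` helper (the `special_names`, `coverage_names`, and `get_shorthand` names mirror the director attributes used above), not the plugin's actual API:

```python
SEPARATOR = "separator"

def build_entries(special_names, coverage_names, get_shorthand):
    """Order dropdown entries: specials, a separator, then user coverage.

    Mirrors CoverageComboBoxModel.refresh(): only user coverages that
    have a shorthand symbol assigned make it into the dropdown.
    """
    with_shorthand = [n for n in coverage_names if get_shorthand(n)]
    entries = list(special_names) + [SEPARATOR] + with_shorthand
    return entries, entries.index(SEPARATOR)

# example: two special rows; two of the three user coverages have a
# shorthand symbol, so the third is excluded from the dropdown
entries, sep = build_entries(
    ["Hot Shell", "Aggregate"],
    ["drcov.boombox.log", "drcov.idaq.log", "unmapped.log"],
    lambda name: "" if name == "unmapped.log" else "A",
)
```

Rows above the separator index are the 'special' entries, the separator row itself is drawn as a thin rule, and everything below it gets the deletable 'X' column.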
@@ -0,0 +1,387 @@
import os
import logging
import weakref
from lighthouse.util.qt import *
from lighthouse.util.misc import plugin_resource
from lighthouse.util.disassembler import disassembler
from lighthouse.composer import ComposingShell
from lighthouse.ui.coverage_table import CoverageTableView, CoverageTableModel, CoverageTableController
from lighthouse.ui.coverage_combobox import CoverageComboBox
from lighthouse.ui.coverage_settings import TableSettingsMenu
logger = logging.getLogger("Lighthouse.UI.Overview")
#------------------------------------------------------------------------------
# Coverage Overview
#------------------------------------------------------------------------------
class CoverageOverview(object):
"""
The Coverage Overview Widget.
"""
def __init__(self, lctx, widget):
self.lctx = lctx
self.widget = widget
self.director = self.lctx.director
self.lctx.coverage_overview = self
self.initialized = False
self._refreshed_callback = None
# see the EventProxy class below for more details
self._events = EventProxy(self)
self.widget.installEventFilter(self._events)
# plugin_resource(os.path.join("icons", "overview.png"))
# initialize the plugin UI
self._ui_init()
# refresh the data UI such that it reflects the most recent data
self.refresh()
# register for cues from the director
self._refreshed_callback = self.director.refreshed(self.refresh)
#--------------------------------------------------------------------------
# Pseudo Widget Functions
#--------------------------------------------------------------------------
@property
def name(self):
if not self.widget:
return "Coverage Overview"
return self.widget.name
@property
def visible(self):
if not self.widget:
return False
return self.widget.visible
def terminate(self):
"""
The CoverageOverview is being hidden / deleted.
"""
if self.widget is None:
return
if self._refreshed_callback:
self.director.unregister_refreshed(self._refreshed_callback)
self._refreshed_callback = None
if self.lctx.coverage_overview is self:
self.lctx.coverage_overview = None
self.initialized = False
self._combobox = None
self._shell = None
self._toolbar = None
self._table_view = None
self._table_controller = None
self._table_model = None
self._settings_button = None
self._settings_menu = None
self._shell_elements = None
self._events = None
self.widget = None
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
# initialize our ui elements
self._ui_init_table()
self._ui_init_toolbar()
self._ui_init_signals()
# layout the populated ui just before showing it
self._ui_layout()
def _ui_init_table(self):
"""
Initialize the coverage table.
"""
self._table_model = CoverageTableModel(self.lctx, self.widget)
self._table_controller = CoverageTableController(self.lctx, self._table_model)
self._table_view = CoverageTableView(self._table_controller, self._table_model, self.widget)
def _ui_init_toolbar(self):
"""
Initialize the coverage toolbar.
"""
# initialize child elements to go on the toolbar
self._ui_init_toolbar_elements()
self._ui_init_settings()
#
# create the 'toolbar', and customize its style. specifically, we are
# interested in tweaking the separator and padding between elements.
#
self._toolbar = QtWidgets.QToolBar()
self._toolbar.setStyle(QtWidgets.QStyleFactory.create("Windows"))
self._toolbar.setStyleSheet('QToolBar{padding:0;margin:0;}')
# populate the toolbar with all our subordinates
self._toolbar.addWidget(self._shell_elements)
self._toolbar.addWidget(self._settings_button)
def _ui_init_toolbar_elements(self):
"""
Initialize the coverage toolbar UI elements.
"""
# the composing shell
self._shell = ComposingShell(
self.lctx,
weakref.proxy(self._table_model),
weakref.proxy(self._table_view)
)
# the coverage combobox
self._combobox = CoverageComboBox(self.director)
# the splitter to make the shell / combobox resizable
self._shell_elements = QtWidgets.QSplitter(QtCore.Qt.Horizontal)
self._shell_elements.setStyleSheet(
"""
QSplitter
{
border: none;
}
QSplitter::handle
{
background-color: #909090;
width: 2px;
height: 2px;
margin: 0 0.5em 0 0.5em
}
QSplitter::handle:horizontal:hover
{
background-color: #3399FF;
}
""")
# add the child items we wish to put the 'splitter' between
# [ composing shell ] [SPLITTER] [ combobox ]
self._shell_elements.addWidget(self._shell)
self._shell_elements.addWidget(self._combobox)
# make the splitter responsive (animate) when hovered
self._shell_elements.handle(1).setAttribute(QtCore.Qt.WA_Hover)
# give the shell expansion preference over the combobox
self._shell_elements.setStretchFactor(0, 1)
def _ui_init_settings(self):
"""
Initialize the overview settings popup.
"""
# settings button
self._settings_button = QtWidgets.QToolButton()
self._settings_button.setIcon(get_qt_icon("SP_DialogResetButton"))
self._settings_button.setStyleSheet("QToolButton::menu-indicator{image: none;}")
# settings menu
self._settings_menu = TableSettingsMenu(self.widget)
def _ui_init_signals(self):
"""
Connect UI signals.
"""
self._settings_menu.connect_signals(self._table_controller, self.lctx)
self._settings_button.clicked.connect(self._ui_show_settings)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
# layout the major elements of our widget
layout = QtWidgets.QGridLayout()
layout.setSpacing(int(get_dpi_scale()*5))
layout.addWidget(self._table_view)
layout.addWidget(self._toolbar)
# apply the layout to the containing form
self.widget.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_show_settings(self):
"""
Handle a click of the settings button.
"""
delta = QtCore.QPoint(
-1*self._settings_menu.sizeHint().width(),
-1*self._settings_menu.sizeHint().height()
)
center = QtCore.QPoint(
int(self._settings_button.sizeHint().width()/2),
int(self._settings_button.sizeHint().height()/2)
)
where = self._settings_button.mapToGlobal(center+delta)
self._settings_menu.popup(where)
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
@disassembler.execute_ui
def refresh(self):
"""
Refresh the Coverage Overview.
"""
if not (self._table_model and self._shell and self._combobox):
return
self._table_model.refresh()
self._shell.refresh()
self._combobox.refresh()
@disassembler.execute_ui
def refresh_theme(self):
"""
Update visual elements based on theme change.
"""
if not (self._table_view and self._table_model and self._shell and self._combobox):
return
self._table_view.refresh_theme()
self._table_model.refresh_theme()
self._shell.refresh_theme()
self._combobox.refresh_theme()
#------------------------------------------------------------------------------
# Qt Event Filter
#------------------------------------------------------------------------------
debugger_docked = False
class EventProxy(QtCore.QObject):
#
# NOTE/COMPAT: QtCore.QEvent.Destroy not in IDA7? Just gonna ship our own...
# - https://doc.qt.io/qt-5/qevent.html#Type-enum
#
EventShow = 17
EventDestroy = 16
EventLayoutRequest = 76
EventUpdateLater = 78
def __init__(self, target):
super(EventProxy, self).__init__()
self._target = weakref.proxy(target)
self._first_hit = True
def eventFilter(self, source, event):
#
# hook the destroy event of the coverage overview widget so that we can
# cleanup after ourselves in the interest of stability
#
if int(event.type()) == self.EventDestroy:
source.removeEventFilter(self)
#
# XXX/V35: This is pretty hacky annoying stuff, but the lifetime
# of the CoverageOverview widget is managed internally by binja
# and gets deleted/cleaned up *after* a database is closed.
#
# it's best we just unload the lighthouse context in binja after
# the UI widgets have been destroyed (which aligns with IDA)
#
if disassembler.NAME == "BINJA":
lctx = self._target.lctx
core = lctx.core
core.binja_close_context(lctx.dctx)
# cleanup the UI / qt references for the CoverageOverview elements
self._target.terminate()
#
# this seems to be 'roughly' the last event triggered after the widget
# is done initializing in both IDA and Binja, but prior to the first
# user-triggered 'show' events.
#
# this is mostly to account for the fact that binja 'shows' the widget
# when it is initially created (outside of our control). this was
# causing lighthouse to automatically cache database metadata when
# every database was opened ...
#
elif int(event.type()) == self.EventLayoutRequest:
self._target.initialized = True
#
# this is used to hook a little bit after the 'show' event of the
# coverage overview. this is the most universal signal that the
# user is *actually* trying to use lighthouse in a meaningful way...
#
# we will use this moment first to check if they skipped straight to
# 'go' and opened the coverage overview without the metadata cache
# getting built.
#
# this case should only happen if the user does 'Show Coverage
# Overview' from the binja-controlled Window menu entry...
#
elif int(event.type()) == self.EventUpdateLater:
if self._target.visible and self._first_hit:
self._first_hit = False
if disassembler.NAME == "BINJA":
self._target.lctx.start()
if not self._target.director.metadata.cached:
self._target.director.refresh()
#
# this is an unknown event, but it seems to fire when the widget is
# being saved/restored by a QMainWidget (in IDA). we use this to try
# and ensure the Coverage Overview stays docked when flipping between
# Reversing and Debugging states in IDA.
#
# See issue #16 on github for more information.
#
elif int(event.type()) == 2002 and disassembler.NAME == "IDA":
import idaapi
#
# if the general registers IDA View exists, we make the assumption
# that the user has probably started debugging.
#
debug_mode = bool(idaapi.find_widget("General registers"))
#
# if this is the first time the user has started debugging, dock
# the coverage overview in the debug QMainWidget workspace. its
# dock status / position should persist future debugger launches.
#
global debugger_docked
if debug_mode and not debugger_docked:
idaapi.set_dock_pos(self._target._title, "Structures", idaapi.DP_TAB)
debugger_docked = True
return False
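The numeric event codes this filter keys off of (from the QEvent::Type enum, plus the undocumented 2002 observed in IDA) drive a simple dispatch. A dependency-free sketch of that dispatch structure (handler names here are illustrative, not the plugin's API):

```python
# Sketch of the eventFilter dispatch above (illustration only, no Qt
# dependency): map QEvent type integers to the action the proxy takes.
EVENT_DESTROY = 16         # QEvent::Destroy
EVENT_LAYOUT_REQUEST = 76  # QEvent::LayoutRequest
EVENT_UPDATE_LATER = 78    # QEvent::UpdateLater
EVENT_DOCK_SHUFFLE = 2002  # undocumented, seen when IDA saves/restores widgets

def dispatch(event_type):
    """Return the action a filter like EventProxy would take."""
    if event_type == EVENT_DESTROY:
        return "terminate"            # tear down the coverage overview
    elif event_type == EVENT_LAYOUT_REQUEST:
        return "mark_initialized"     # widget is done initializing
    elif event_type == EVENT_UPDATE_LATER:
        return "maybe_refresh"        # first real 'show' -> build metadata
    elif event_type == EVENT_DOCK_SHUFFLE:
        return "maybe_dock"           # keep the overview docked in IDA
    return "ignore"
```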
@@ -0,0 +1,101 @@
import logging
from lighthouse.util.qt import *
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.UI.Settings")
class TableSettingsMenu(QtWidgets.QMenu):
"""
A quick-access settings menu for Lighthouse.
"""
def __init__(self, parent=None):
super(TableSettingsMenu, self).__init__(parent)
self._visible_action = None
self._ui_init_actions()
self.setToolTipsVisible(True)
#--------------------------------------------------------------------------
# QMenu Overloads
#--------------------------------------------------------------------------
def event(self, event):
"""
Hook the QMenu event stream.
"""
action = self.activeAction()
# swallow clicks to checkbox/radiobutton actions to keep qmenu open
if event.type() == QtCore.QEvent.MouseButtonRelease:
if action and action.isEnabled() and action.isCheckable():
action.trigger()
event.accept()
return True
# handle any other events as we normally would
return super(TableSettingsMenu, self).event(event)
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init_actions(self):
"""
Initialize the menu actions.
"""
# lighthouse colors
self._action_change_theme = QtWidgets.QAction("Change theme", None)
self._action_change_theme.setToolTip("Lighthouse color & theme customization")
self.addAction(self._action_change_theme)
self.addSeparator()
# painting
self._action_force_clear = QtWidgets.QAction("Force clear paint (slow!)", None)
self._action_force_clear.setToolTip("Attempt to forcefully clear stuck paint from the database")
self.addAction(self._action_force_clear)
self._action_disable_paint = QtWidgets.QAction("Disable painting", None)
self._action_disable_paint.setCheckable(True)
self._action_disable_paint.setToolTip("Disable the coverage painting subsystem")
self.addAction(self._action_disable_paint)
self.addSeparator()
# table actions
self._action_refresh_metadata = QtWidgets.QAction("Rebuild coverage mappings", None)
self._action_refresh_metadata.setToolTip("Refresh the database metadata and coverage mapping")
self.addAction(self._action_refresh_metadata)
self._action_export_html = QtWidgets.QAction("Generate HTML report", None)
self._action_export_html.setToolTip("Export the coverage table to HTML")
self.addAction(self._action_export_html)
self._action_hide_zero = QtWidgets.QAction("Hide 0% coverage", None)
self._action_hide_zero.setToolTip("Hide table entries with no coverage data")
self._action_hide_zero.setCheckable(True)
self.addAction(self._action_hide_zero)
def connect_signals(self, controller, lctx):
"""
Connect UI signals.
"""
self._action_change_theme.triggered.connect(lctx.core.palette.interactive_change_theme)
self._action_refresh_metadata.triggered.connect(lctx.director.refresh)
self._action_hide_zero.triggered[bool].connect(controller._model.filter_zero_coverage)
self._action_disable_paint.triggered[bool].connect(lambda x: lctx.painter.set_enabled(not x))
self._action_force_clear.triggered.connect(lctx.painter.force_clear)
self._action_export_html.triggered.connect(controller.export_to_html)
lctx.painter.status_changed(self._ui_painter_changed_status)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
@disassembler.execute_ui
def _ui_painter_changed_status(self, painter_enabled):
"""
Handle an event from the painter being enabled/disabled.
"""
self._action_disable_paint.setChecked(not painter_enabled)
File diff suppressed because it is too large
@@ -0,0 +1,176 @@
import os
import logging
from lighthouse.util import lmsg
from lighthouse.util.qt import *
from lighthouse.util.misc import human_timestamp
from lighthouse.util.python import *
logger = logging.getLogger("Lighthouse.UI.Xref")
#------------------------------------------------------------------------------
# Coverage Xref Dialog
#------------------------------------------------------------------------------
class CoverageXref(QtWidgets.QDialog):
"""
A Qt Dialog to list other coverage sets that contain a given address.
This class makes up a rudimentary xref dialog. It does not follow Qt
'best practices' because it does not need to be super flashy, nor does
it demand much facetime.
"""
def __init__(self, director, address):
super(CoverageXref, self).__init__()
self._director = director
# dialog attributes
self.address = address
self.selected_coverage = None
self.selected_filepath = None
# configure the widget for use
self._ui_init()
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
self.setWindowTitle("Coverage Xrefs to 0x%X" % self.address)
self.setWindowFlags(self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
self.setModal(True)
self._font = self.font()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize coverage xref table
self._ui_init_table()
self._populate_table()
# layout the populated UI just before showing it
self._ui_layout()
def _ui_init_table(self):
"""
Initialize the coverage xref table UI elements.
"""
self._table = QtWidgets.QTableWidget()
self._table.verticalHeader().setVisible(False)
self._table.setHorizontalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
self._table.horizontalHeader().setFont(self._font)
self._table.setFont(self._font)
self._table.setWordWrap(False)
# symbol, cov %, name, time
self._table.setColumnCount(4)
self._table.setHorizontalHeaderLabels(["Sym", "Cov %", "Coverage Name", "Timestamp"])
self._table.setColumnWidth(0, 45)
self._table.setColumnWidth(1, 55)
self._table.setColumnWidth(2, 400)
self._table.setColumnWidth(3, 100)
# left align text in column headers
for i in range(4):
self._table.horizontalHeaderItem(i).setTextAlignment(QtCore.Qt.AlignLeft)
# disable bolding of column headers when selected
self._table.horizontalHeader().setHighlightSections(False)
# stretch the filename field, as it is the most important
self._table.horizontalHeader().setSectionResizeMode(2, QtWidgets.QHeaderView.Stretch)
# make table read only, select a full row by default
self._table.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self._table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
# catch double click events on table rows
self._table.cellDoubleClicked.connect(self._ui_cell_double_click)
def _populate_table(self):
"""
Populate the xref table with data from the coverage director.
"""
cov_xrefs = self._director.get_address_coverage(self.address)
file_xrefs = self._director.get_address_file(self.address)
# dedupe
for coverage in cov_xrefs:
if coverage.filepath in file_xrefs:
file_xrefs.remove(coverage.filepath)
# populate table with coverage details
self._table.setSortingEnabled(False)
self._table.setRowCount(len(cov_xrefs) + len(file_xrefs))
# coverage objects
for i, coverage in enumerate(cov_xrefs, 0):
self._table.setItem(i, 0, QtWidgets.QTableWidgetItem(self._director.get_shorthand(coverage.name)))
self._table.setItem(i, 1, QtWidgets.QTableWidgetItem("%5.2f" % (coverage.instruction_percent*100)))
name_entry = QtWidgets.QTableWidgetItem(coverage.name)
name_entry.setToolTip(coverage.filepath)
self._table.setItem(i, 2, name_entry)
date_entry = QtWidgets.QTableWidgetItem()
date_entry.setData(QtCore.Qt.DisplayRole, QtCore.QDateTime.fromMSecsSinceEpoch(int(coverage.timestamp*1000)))
self._table.setItem(i, 3, date_entry)
# filepaths
for i, filepath in enumerate(file_xrefs, len(cov_xrefs)):
# try to read timestamp of the file on disk (if it exists)
try:
timestamp = os.path.getmtime(filepath)
except (OSError, TypeError):
timestamp = 0
# populate table entry
self._table.setItem(i, 0, QtWidgets.QTableWidgetItem("-"))
self._table.setItem(i, 1, QtWidgets.QTableWidgetItem("-"))
name_entry = QtWidgets.QTableWidgetItem(os.path.basename(filepath))
name_entry.setToolTip(filepath)
self._table.setItem(i, 2, name_entry)
date_entry = QtWidgets.QTableWidgetItem()
date_entry.setData(QtCore.Qt.DisplayRole, QtCore.QDateTime.fromMSecsSinceEpoch(int(timestamp*1000)))
self._table.setItem(i, 3, date_entry)
self._table.resizeColumnsToContents()
self._table.resizeRowsToContents()
self._table.setSortingEnabled(True)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
layout = QtWidgets.QVBoxLayout()
# layout child widgets
layout.addWidget(self._table)
# scale widget dimensions based on DPI
height = int(get_dpi_scale() * 250)
width = int(get_dpi_scale() * 600)
self.setMinimumHeight(height)
self.setMinimumWidth(width)
# apply the widget layout
self.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_cell_double_click(self, row, column):
"""
A cell/row has been double clicked in the xref table.
"""
if self._table.item(row, 0).text() == "-":
self.selected_filepath = self._table.item(row, 2).toolTip()
else:
self.selected_coverage = self._table.item(row, 2).text()
self.accept()
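The dedupe step in `_populate_table` drops on-disk filepaths that are already represented by a loaded coverage object. Pulled out on its own, the logic looks roughly like this (function name is illustrative):

```python
def dedupe_xrefs(cov_xrefs, file_xrefs):
    """
    Remove filepaths from file_xrefs that are already backed by a loaded
    coverage object, mirroring the dedupe loop in _populate_table.
    """
    loaded_paths = {coverage.filepath for coverage in cov_xrefs}
    return [path for path in file_xrefs if path not in loaded_paths]
```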
@@ -0,0 +1,168 @@
import os
import logging
from lighthouse.util import lmsg
from lighthouse.util.qt import *
from lighthouse.util.misc import human_timestamp
from lighthouse.util.python import *
logger = logging.getLogger("Lighthouse.UI.ModuleSelector")
#------------------------------------------------------------------------------
# Coverage Xref Dialog
#------------------------------------------------------------------------------
class ModuleSelector(QtWidgets.QDialog):
"""
A Qt Dialog to list all the coverage modules in a coverage file.
This class makes up a rudimentary selector dialog. It does not follow Qt
'best practices' because it does not need to be super flashy, nor does
it demand much facetime.
"""
def __init__(self, target_name, module_names, coverage_file):
super(ModuleSelector, self).__init__()
self._target_name = target_name
self._module_names = module_names
self._coverage_file = os.path.basename(coverage_file)
# dialog attributes
self.selected_name = None
# configure the widget for use
self._ui_init()
@property
def remember_alias(self):
return self._checkbox_remember.isChecked()
@property
def ignore_missing(self):
return self._checkbox_ignore_missing.isChecked()
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
self.setWindowTitle("Select module matching this database")
self.setWindowFlags(self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
self.setModal(True)
self._font = self.font()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize module selector table
self._ui_init_header()
self._ui_init_table()
self._populate_table()
# layout the populated UI just before showing it
self._ui_layout()
def _ui_init_header(self):
"""
Initialize the module selector header UI elements.
"""
description_text = \
"Lighthouse could not automatically identify the target module in the given coverage file:<br />" \
"<br />" \
"-- <b>Target:</b> {0}<br />" \
"-- <b>Coverage File:</b> {1}<br />" \
"<br />" \
"Please double click the name of the module that matches this database, or close this dialog<br />" \
"if you do not see your binary listed in the table below...".format(self._target_name, self._coverage_file)
self._label_description = QtWidgets.QLabel(description_text)
self._label_description.setTextFormat(QtCore.Qt.RichText)
self._label_description.setFont(self._font)
#self._label_description.setWordWrap(True)
# a checkbox to save the user selected alias to the database
self._checkbox_remember = QtWidgets.QCheckBox("Remember target module alias for this session")
self._checkbox_remember.setFont(self._font)
# a checkbox to ignore future 'missing coverage' / select module warnings
self._checkbox_ignore_missing = QtWidgets.QCheckBox("Suppress this dialog for the remaining coverage files")
self._checkbox_ignore_missing.setFont(self._font)
def _ui_init_table(self):
"""
Initialize the module selector table UI elements.
"""
self._table = QtWidgets.QTableWidget()
self._table.verticalHeader().setVisible(False)
self._table.setHorizontalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
self._table.horizontalHeader().setFont(self._font)
self._table.setFont(self._font)
# Create a simple table / list
self._table.setColumnCount(1)
self._table.setHorizontalHeaderLabels(["Module Name"])
# left align text in column headers
self._table.horizontalHeaderItem(0).setTextAlignment(QtCore.Qt.AlignLeft)
# disable bolding of column headers when selected
self._table.horizontalHeader().setHighlightSections(False)
# stretch the last column of the table (aesthetics)
self._table.horizontalHeader().setStretchLastSection(True)
# make table read only, select a full row by default
self._table.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self._table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
# catch double click events on table rows
self._table.cellDoubleClicked.connect(self._ui_cell_double_click)
def _populate_table(self):
"""
Populate the module table with the module names provided to this dialog.
"""
self._table.setSortingEnabled(False)
self._table.setRowCount(len(self._module_names))
for i, module_name in enumerate(self._module_names, 0):
self._table.setItem(i, 0, QtWidgets.QTableWidgetItem(module_name))
self._table.resizeRowsToContents()
self._table.setSortingEnabled(True)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
layout = QtWidgets.QVBoxLayout()
#layout.setContentsMargins(0,0,0,0)
# layout child widgets
layout.addWidget(self._label_description)
layout.addWidget(self._table)
layout.addWidget(self._checkbox_remember)
layout.addWidget(self._checkbox_ignore_missing)
# scale widget dimensions based on DPI
height = int(get_dpi_scale() * 250)
width = int(get_dpi_scale() * 400)
self.setMinimumHeight(height)
self.setMinimumWidth(width)
# apply the widget layout
self.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_cell_double_click(self, row, column):
"""
A cell/row has been double clicked in the module table.
"""
self.selected_name = self._table.item(row, 0).text()
self.accept()
@@ -0,0 +1,573 @@
import os
import json
import glob
import shutil
import logging
# NOTE: Py2/Py3 compat
try:
from json.decoder import JSONDecodeError
except ImportError:
JSONDecodeError = ValueError
from lighthouse.util.qt import *
from lighthouse.util.log import lmsg
from lighthouse.util.misc import *
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.UI.Palette")
#------------------------------------------------------------------------------
# Plugin Color Palette
#------------------------------------------------------------------------------
class LighthousePalette(object):
"""
Color Palette for the Lighthouse plugin.
"""
def __init__(self):
"""
Initialize default palette colors for Lighthouse.
"""
self._initialized = False
self._last_directory = None
self._required_fields = []
# hints about the user theme (light/dark)
self._user_qt_hint = "dark"
self._user_disassembly_hint = "dark"
self.theme = None
self._default_themes = \
{
"dark": "synth.json",
"light": "dullien.json"
}
# list of objects requesting a callback after a theme change
self._theme_changed_callbacks = []
# get a list of required theme fields, for user theme validation
self._load_required_fields()
# initialize the user theme directory
self._populate_user_theme_dir()
# load a placeholder theme (unhinted) for initial Lighthouse bring-up
self._load_preferred_theme(True)
self._initialized = False
@staticmethod
def get_plugin_theme_dir():
"""
Return the Lighthouse plugin theme directory.
"""
return plugin_resource("themes")
@staticmethod
def get_user_theme_dir():
"""
Return the Lighthouse user theme directory.
"""
theme_directory = os.path.join(
disassembler.get_disassembler_user_directory(),
"lighthouse_themes"
)
return theme_directory
#----------------------------------------------------------------------
# Properties
#----------------------------------------------------------------------
@property
def TOKEN_COLORS(self):
"""
Return the palette of token colors.
"""
return \
{
# logic operators
"OR": self.logic_token,
"XOR": self.logic_token,
"AND": self.logic_token,
"MINUS": self.logic_token,
# misc
"COMMA": self.comma_token,
"LPAREN": self.paren_token,
"RPAREN": self.paren_token,
#"WS": self.whitepsace_token,
#"UNKNOWN": self.unknown_token,
# coverage
"COVERAGE_TOKEN": self.coverage_token,
}
#----------------------------------------------------------------------
# Callbacks
#----------------------------------------------------------------------
def theme_changed(self, callback):
"""
Subscribe a callback for theme change events.
"""
register_callback(self._theme_changed_callbacks, callback)
def _notify_theme_changed(self):
"""
Notify listeners of a theme change event.
"""
notify_callback(self._theme_changed_callbacks)
#----------------------------------------------------------------------
# Public
#----------------------------------------------------------------------
def warmup(self):
"""
Warms up the theming system prior to initial use.
"""
if self._initialized:
return
logger.debug("Warming up theme subsystem...")
#
# attempt to load the user's preferred (or hinted) theme. if we are
# successful, then there's nothing else to do!
#
self._refresh_theme_hints()
if self._load_preferred_theme():
self._initialized = True
logger.debug(" - warmup complete, using preferred theme!")
return
#
# failed to load the preferred theme... so delete the 'active'
# file (if there is one) and warn the user before falling back
#
try:
os.remove(os.path.join(self.get_user_theme_dir(), ".active_theme"))
except OSError:
pass
disassembler.warning(
"Failed to load Lighthouse user theme!\n\n"
"Please check the console for more information..."
)
#
# if no theme is loaded, we will attempt to detect & load the in-box
# themes based on the user's disassembler theme
#
loaded = self._load_preferred_theme(fallback=True)
if not loaded:
lmsg("Could not load Lighthouse fallback theme!") # this is a bad place to be...
return
logger.debug(" - warmup complete, using hint-recommended theme!")
self._initialized = True
def interactive_change_theme(self):
"""
Open a file dialog and let the user select a new Lighthouse theme.
"""
# create & configure a Qt File Dialog for immediate use
file_dialog = QtWidgets.QFileDialog(
None,
"Open Lighthouse theme file",
self._last_directory,
"JSON Files (*.json)"
)
file_dialog.setFileMode(QtWidgets.QFileDialog.ExistingFile)
# prompt the user with the file dialog, and await filename(s)
filename, _ = file_dialog.getOpenFileName()
if not filename:
return
#
# ensure the user is only trying to load themes from the user theme
# directory, as it simplifies some of our internal loading logic
#
file_dir = os.path.abspath(os.path.dirname(filename))
user_dir = os.path.abspath(self.get_user_theme_dir())
if file_dir != user_dir:
text = "Please install your Lighthouse theme into the user theme directory:\n\n" + user_dir
disassembler.warning(text)
return
#
# remember the last directory we were in (parsed from a selected file)
# for the next time the user comes to load coverage files
#
if filename:
self._last_directory = os.path.dirname(filename) + os.sep
# log the captured (selected) filenames from the dialog
logger.debug("Captured filename from theme file dialog: '%s'" % filename)
#
# before applying the selected lighthouse theme, we should ensure that
# we know if the user is using a light or dark disassembler theme as
# it may change which colors get used by the lighthouse theme
#
self._refresh_theme_hints()
# if the selected theme fails to load, throw a visible warning
if not self._load_theme(filename):
disassembler.warning(
"Failed to load Lighthouse user theme!\n\n"
"Please check the console for more information..."
)
return
# since everything looks like it loaded okay, save this as the preferred theme
with open(os.path.join(self.get_user_theme_dir(), ".active_theme"), "w") as f:
f.write(filename)
def refresh_theme(self):
"""
Dynamically compute palette color based on IDA theme.
Depending on if IDA is using a dark or light theme, we *try*
to select colors that will hopefully keep things most readable.
"""
self._refresh_theme_hints()
self._load_preferred_theme()
#--------------------------------------------------------------------------
# Theme Internals
#--------------------------------------------------------------------------
def _populate_user_theme_dir(self):
"""
Create the Lighthouse user theme directory and install default themes.
"""
# create the user theme directory if it does not exist
user_theme_dir = self.get_user_theme_dir()
makedirs(user_theme_dir)
# enumerate all in-box / default themes
plugin_theme_dir = self.get_plugin_theme_dir()
json_files = glob.glob(os.path.join(plugin_theme_dir, "*.json"))
# copy the default themes into the user directory if they don't exist
for default_theme_file in json_files:
theme_name = os.path.basename(default_theme_file)
#
# check if lighthouse has copied the default themes into the user
# theme directory before. when 'default' themes exist, skip them
# rather than overwriting... as the user may have modified them
#
user_theme_file = os.path.join(user_theme_dir, theme_name)
if os.path.exists(user_theme_file):
continue
# copy the in-box themes to the user theme directory
shutil.copy(default_theme_file, user_theme_file)
#
# if the user tries to switch themes, ensure the file dialog will start
# in their user theme directory
#
self._last_directory = user_theme_dir
def _load_required_fields(self):
"""
Load the required theme fields from a donor in-box theme.
"""
logger.debug("Loading required theme fields from disk...")
# load a known-good theme from the plugin's in-box themes
filepath = os.path.join(self.get_plugin_theme_dir(), self._default_themes["dark"])
theme = self._read_theme(filepath)
#
# save all the defined fields in this 'good' theme as a ground truth
# to validate user themes against...
#
self._required_fields = theme["fields"].keys()
def _load_preferred_theme(self, fallback=False):
"""
Load the user's preferred theme, or the one hinted at by the theme subsystem.
"""
logger.debug("Loading preferred theme from disk...")
user_theme_dir = self.get_user_theme_dir()
# attempt to read the name of the user's active / preferred theme
active_filepath = os.path.join(user_theme_dir, ".active_theme")
try:
theme_name = open(active_filepath).read().strip()
logger.debug(" - Got '%s' from .active_theme" % theme_name)
except (OSError, IOError):
theme_name = None
#
# if the user does not have a preferred theme set yet, we will try to
# pick one for them based on their disassembler UI.
#
if not theme_name:
#
# we have two themes hints which roughly correspond to the tone of
# their disassembly background, and then their general Qt widgets.
#
# if both themes seem to align on style (eg the user is using a
# 'dark' UI), then we will select the appropriate in-box theme
#
if self._user_qt_hint == self._user_disassembly_hint:
theme_name = self._default_themes[self._user_qt_hint]
logger.debug(" - No preferred theme, hints suggest theme '%s'" % theme_name)
#
# the UI hints don't match, so the user is using some ... weird
# mismatched theming in their disassembler. let's just default to
# the 'dark' lighthouse theme as it is more robust
#
else:
theme_name = self._default_themes["dark"]
#
# should the user themes be in a bad state, we can fallback to the
# in-box themes. this should only happen if users malform the default
# themes that have been copied into the user theme directory
#
if fallback:
theme_path = os.path.join(self.get_plugin_theme_dir(), theme_name)
else:
theme_path = os.path.join(self.get_user_theme_dir(), theme_name)
# finally, attempt to load & apply the theme -- return True/False
return self._load_theme(theme_path)
def _validate_theme(self, theme):
"""
Perform rudimentary theme validation.
"""
logger.debug(" - Validating theme fields for '%s'..." % theme["name"])
user_fields = theme.get("fields", None)
if not user_fields:
lmsg("Could not find theme 'fields' definition")
return False
# check that all the 'required' fields exist in the given theme
for field in self._required_fields:
if field not in user_fields:
lmsg("Could not find required theme field '%s'" % field)
return False
# theme looks good enough for now...
return True
def _load_theme(self, filepath):
"""
Load and apply the Lighthouse theme at the given filepath.
"""
# attempt to read json theme from disk
try:
theme = self._read_theme(filepath)
# reading the file from disk failed
except OSError:
lmsg("Could not open theme file at '%s'" % filepath)
return False
# JSON decoding failed
except JSONDecodeError as e:
lmsg("Failed to decode theme '%s' to json" % filepath)
lmsg(" - " + str(e))
return False
# do some basic sanity checking on the given theme file
if not self._validate_theme(theme):
return False
# try applying the loaded theme to Lighthouse
try:
self._apply_theme(theme)
except Exception as e:
lmsg("Failed to load Lighthouse user theme\n%s" % e)
return False
# return success
self._notify_theme_changed()
return True
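The `JSONDecodeError` caught above is the Py2/Py3 shim imported at the top of this file: on Python 3.5+ it is `json.decoder.JSONDecodeError`, while on Python 2 it falls back to `ValueError` (which bad JSON raises there, and which `JSONDecodeError` subclasses anyway). A minimal self-contained demonstration of the same pattern:

```python
import json

# Py2/Py3 compat: JSONDecodeError only exists on Python 3.5+
try:
    from json.decoder import JSONDecodeError
except ImportError:
    JSONDecodeError = ValueError

def try_parse(text):
    """Return the parsed object, or None if the text is not valid JSON."""
    try:
        return json.loads(text)
    except JSONDecodeError:
        return None
```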
def _read_theme(self, filepath):
"""
Parse the Lighthouse theme file from the given filepath.
"""
logger.debug(" - Reading theme file '%s'..." % filepath)
# attempt to load the theme file contents from disk
raw_theme = open(filepath, "r").read()
# convert the theme file contents to a json object/dict
theme = json.loads(raw_theme)
# all good
return theme
def _apply_theme(self, theme):
"""
Apply the given theme definition to Lighthouse.
"""
logger.debug(" - Applying theme '%s'..." % theme["name"])
colors = theme["colors"]
for field_name, color_entry in theme["fields"].items():
# color has 'light' and 'dark' variants
if isinstance(color_entry, list):
color_name = self._pick_best_color(field_name, color_entry)
# there is only one color defined
else:
color_name = color_entry
# load the color
color_value = colors[color_name]
color = QtGui.QColor(*color_value)
# set theme self.[field_name] = color
setattr(self, field_name, color)
# HACK: IDA uses BBGGRR for its database highlighting
if disassembler.NAME == "IDA":
rgb = int(self.coverage_paint.name()[1:], 16)
self.coverage_paint = swap_rgb(rgb)
# all done, save the theme in case we need it later
self.theme = theme
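The BBGGRR hack above leans on `swap_rgb` (from `lighthouse.util.misc`, not shown in this diff). Presumably it just exchanges the red and blue bytes of a packed 24-bit color; a sketch under that assumption:

```python
def swap_rgb(rgb):
    """
    Exchange the red and blue bytes of a packed 0xRRGGBB color,
    yielding 0xBBGGRR (the byte order IDA expects for paint colors).
    """
    red   = (rgb >> 16) & 0xFF
    green = (rgb >>  8) & 0xFF
    blue  =  rgb        & 0xFF
    return (blue << 16) | (green << 8) | red
```

Applying it twice round-trips the value, which is why a single swap suffices in either direction.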
def _pick_best_color(self, field_name, color_entry):
"""
Given a variable color_entry, select the best color based on the theme hints.
"""
assert len(color_entry) == 2, "Malformed color entry, must be (dark, light)"
dark, light = color_entry
# coverage_paint is actually the only field that applies to disas...
if field_name == "coverage_paint":
if self._user_disassembly_hint == "dark":
return dark
else:
return light
# the rest of the fields should be considered 'qt' fields
if self._user_qt_hint == "dark":
return dark
return light
#--------------------------------------------------------------------------
# Theme Inference
#--------------------------------------------------------------------------
def _refresh_theme_hints(self):
"""
Peek at the UI context to infer what kind of theme the user might be using.
"""
self._user_qt_hint = self._qt_theme_hint()
self._user_disassembly_hint = self._disassembly_theme_hint() or "dark"
def _disassembly_theme_hint(self):
"""
Binary hint of the IDA color theme.
This routine returns a best effort hint as to what kind of theme is
in use for the IDA Views (Disas, Hex, HexRays, etc).
Returns 'dark' or 'light' indicating the user's theme
"""
#
# determine whether to use a 'dark' or 'light' paint based on the
# background color of the user's IDA text based windows
#
bg_color = disassembler.get_disassembly_background_color()
if not bg_color:
logger.debug(" - Failed to get hint for disassembly background...")
return None
# return 'dark' or 'light'
return test_color_brightness(bg_color)
def _qt_theme_hint(self):
"""
Binary hint of the Qt color theme.
This routine returns a best effort hint as to what kind of theme the
QtWidgets throughout IDA are using. This is to accommodate users
who may be using Zyantific's IDASkins plugin (or others) to further
customize IDA's appearance.
Returns 'dark' or 'light' indicating the user's theme
"""
#
# to determine what kind of Qt based theme IDA is using, we create a
# test widget and check the colors put into the palette the widget
# inherits from the application (eg, IDA).
#
test_widget = QtWidgets.QWidget()
#
# in order to 'realize' the palette used to render (draw) the widget,
# it must first be made visible. since we don't want to pop random
# widgets in front of the user, we set this attribute so that we can
# silently bake the widget colors.
#
# NOTE/COMPAT: WA_DontShowOnScreen
#
# https://www.riverbankcomputing.com/news/pyqt-56
#
# lmao, don't ask me why they forgot about this attribute from 5.0 - 5.6
#
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
if disassembler.NAME == "BINJA" or (disassembler.NAME == "IDA"
and (disassembler._version_major, disassembler._version_minor) >= (9, 2)):
test_widget.setAttribute(QtCore.Qt.WA_DontShowOnScreen)
else:
test_widget.setAttribute(103) # taken from http://doc.qt.io/qt-5/qt.html
# render the (invisible) widget
test_widget.show()
# now we farm the background color from the qwidget
bg_color = test_widget.palette().color(QtGui.QPalette.Window)
# 'hide' & delete the widget
test_widget.hide()
test_widget.deleteLater()
# return 'dark' or 'light'
return test_color_brightness(bg_color)

@@ -0,0 +1,60 @@
{
"name": "Dullien",
"colors":
{
"black": [0, 0, 0],
"white": [255, 255, 255],
"gray": [100, 100, 100],
"lightGray": [220, 220, 220],
"red": [255, 0, 0],
"blue": [0, 0, 255],
"lightRed": [240, 150, 150],
"lightGreen": [150, 240, 150],
"darkGreen": [0, 60, 0],
"lightBlue": [140, 170, 220]
},
"fields":
{
"coverage_paint": ["darkGreen", "lightGreen"],
"table_text": "black",
"table_grid": "gray",
"table_coverage_none": "lightRed",
"table_coverage_bad": "lightRed",
"table_coverage_good": "lightGreen",
"table_background": "white",
"table_selection": "lightBlue",
"html_summary_text": "gray",
"html_table_header": "lightGray",
"html_page_background": "white",
"shell_text": "black",
"shell_text_valid": "blue",
"shell_text_invalid": "red",
"shell_highlight_invalid": "lightRed",
"shell_border": "gray",
"shell_border_focus": "lightBlue",
"shell_background": "white",
"shell_hint_text": "black",
"shell_hint_background": "white",
"logic_token": "red",
"comma_token": "black",
"paren_token": "black",
"coverage_token": "blue",
"combobox_text": "black",
"combobox_selection_text": "white",
"combobox_selection_background": "lightBlue",
"combobox_border": "gray",
"combobox_border_focus": "lightBlue",
"combobox_background": "white"
}
}
@@ -0,0 +1,69 @@
{
"name": "Long Night",
"author": "https://github.com/ioncodes",
"colors":
{
"black": [33, 33, 33],
"white": [241, 239, 236],
"darkGray": [20, 20, 20],
"darkGray2": [30, 30, 30],
"darkGray3": [54, 54, 54],
"gray": [100, 100, 100],
"lightGray": [55, 55, 55],
"red": [188, 101, 141],
"green": [64, 255, 64],
"blue": [104, 134, 197],
"lightBlue": [128, 200, 255],
"darkBlue": [44, 44, 44],
"purple": [121, 104, 197],
"focusRed": [255, 83, 112],
"selection": [67, 67, 67]
},
"fields":
{
"coverage_paint": ["darkBlue", "lightBlue"],
"table_text": "white",
"table_grid": "black",
"table_coverage_none": "black",
"table_coverage_bad": "red",
"table_coverage_good": "blue",
"table_background": "black",
"table_selection": "purple",
"html_summary_text": "white",
"html_table_header": "white",
"html_page_background": "black",
"shell_text": "white",
"shell_text_valid": "lightBlue",
"shell_text_invalid": "red",
"shell_highlight_invalid": "red",
"shell_border": "lightGray",
"shell_border_focus": "focusRed",
"shell_background": "black",
"shell_hint_text": "white",
"shell_hint_background": "black",
"logic_token": "red",
"comma_token": "green",
"paren_token": "green",
"coverage_token": "lightBlue",
"combobox_text": "white",
"combobox_selection_text": "white",
"combobox_selection_background": "selection",
"combobox_border": "lightGray",
"combobox_border_focus": "focusRed",
"combobox_background": "black"
}
}
@@ -0,0 +1,66 @@
{
"name": "Synth",
"colors":
{
"black": [0, 0, 0],
"white": [255, 255, 255],
"darkGray": [20, 20, 20],
"darkGray2": [30, 30, 30],
"darkGray3": [54, 54, 54],
"gray": [100, 100, 100],
"lightGray": [180, 180, 180],
"red": [221, 0, 0],
"green": [64, 255, 64],
"blue": [51, 153, 255],
"lightBlue": [128, 200, 255],
"darkBlue": [0, 0, 153],
"purple": [100, 0, 130]
},
"fields":
{
"coverage_paint": ["darkBlue", "lightBlue"],
"table_text": "white",
"table_grid": "black",
"table_coverage_none": "darkGray2",
"table_coverage_bad": "red",
"table_coverage_good": "blue",
"table_background": "darkGray",
"table_selection": "purple",
"html_summary_text": "lightGray",
"html_table_header": "gray",
"html_page_background": "darkGray3",
"shell_text": "white",
"shell_text_valid": "lightBlue",
"shell_text_invalid": "red",
"shell_highlight_invalid": "red",
"shell_border": "gray",
"shell_border_focus": "lightGray",
"shell_background": "darkGray2",
"shell_hint_text": "white",
"shell_hint_background": "darkGray3",
"logic_token": "red",
"comma_token": "green",
"paren_token": "green",
"coverage_token": "lightBlue",
"combobox_text": "white",
"combobox_selection_text": "white",
"combobox_selection_background": "blue",
"combobox_border": "gray",
"combobox_border_focus": "lightGray",
"combobox_background": "darkGray2"
}
}
@@ -0,0 +1,4 @@
from .python import *
from .misc import *
from .debug import *
from .log import lmsg, logging_started, start_logging
@@ -0,0 +1,97 @@
import sys
import inspect
import cProfile
import traceback
from .log import lmsg
from .disassembler import disassembler
#------------------------------------------------------------------------------
# Debug
#------------------------------------------------------------------------------
#
# This file contains random snippets of code that I frequently use while
# developing and debugging parts of lighthouse. I don't expect any of this
# code to be active or in use for major releases.
#
#------------------------------------------------------------------------------
# Call Profiling
#------------------------------------------------------------------------------
pr = cProfile.Profile()
def profile(func):
"""
A simple function profiling decorator.
"""
def wrap(*args, **kwargs):
global pr
pr.enable()
result = func(*args, **kwargs)
pr.disable()
pr.print_stats(sort="tottime")
return result
return wrap
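Hypothetical usage of the `@profile` decorator above, shown with a self-contained replica (the `print_stats` call is omitted here so the sketch stays quiet; `slow_sum` is an invented example function):

```python
import cProfile

pr = cProfile.Profile()

def profile(func):
    """A simple function profiling decorator (replica of the one above)."""
    def wrap(*args, **kwargs):
        pr.enable()
        result = func(*args, **kwargs)
        pr.disable()
        return result
    return wrap

@profile
def slow_sum(n):
    # deliberately naive work so the profiler has something to record
    return sum(i * i for i in range(n))

result = slow_sum(10)  # profiled transparently; result is 285
```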
#------------------------------------------------------------------------------
# Function Line Profiling
#------------------------------------------------------------------------------
try:
import pprofile
def line_profile(func):
def profiled_func(*args, **kwargs):
try:
profiler = pprofile.ThreadProfile()
with profiler():
return func(*args, **kwargs)
finally:
caller_file = inspect.getfile(func)
profiler.annotate(pprofile.EncodeOrReplaceWriter(sys.stdout), [caller_file])
return profiled_func
except ImportError:
def line_profile(func):
def nothing(*args, **kwargs):
return func(*args, **kwargs)
return nothing
#------------------------------------------------------------------------------
# Error Logging
#------------------------------------------------------------------------------
def catch_errors(func):
"""
A simple catch-all decorator to try and log Lighthouse crashes.
This will be used to wrap high-risk or new code, in an effort to catch
and fix bugs without leaving the user in a stuck state.
"""
def wrap(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception:
exc_type, exc_value, exc_traceback = sys.exc_info()
st = traceback.format_stack()[:-1]
ex = traceback.format_exception(exc_type, exc_value, exc_traceback)[2:]
# log full crashing callstack to console
full_error = st + ex
full_error = ''.join(full_error).splitlines()
lmsg("Lighthouse experienced an error... please file an issue on GitHub with this traceback:")
lmsg("")
for line in full_error:
lmsg(line)
# notify the user that a bug occurred
disassembler.warning(
"Something bad happened to Lighthouse :-(\n\n" \
"Please file an issue on GitHub with the traceback from your disassembler console."
)
return wrap
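The same catch-and-log pattern, reduced to a dependency-free sketch — the `captured` list stands in for `lmsg()` and the `disassembler.warning(...)` dialog:

```python
import traceback

captured = []  # stand-in for the disassembler console / warning dialog

def catch_errors(func):
    """Swallow exceptions from func, recording the traceback instead."""
    def wrap(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            captured.append(traceback.format_exc())
    return wrap

@catch_errors
def boom():
    raise ValueError("crash")

boom()  # does not propagate; the traceback is recorded instead
```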
@@ -0,0 +1,43 @@
#--------------------------------------------------------------------------
# Disassembler API Selector
#--------------------------------------------------------------------------
#
# this file will select and load the shimmed disassembler API for the
# appropriate (current) disassembler platform.
#
# see api.py for more details regarding this API shim layer
#
disassembler = None
#--------------------------------------------------------------------------
# IDA API Shim
#--------------------------------------------------------------------------
if disassembler is None:
try:
from .ida_api import IDACoreAPI, IDAContextAPI
disassembler = IDACoreAPI()
DisassemblerContextAPI = IDAContextAPI
except ImportError:
pass
#--------------------------------------------------------------------------
# Binary Ninja API Shim
#--------------------------------------------------------------------------
if disassembler is None:
try:
from .binja_api import BinjaCoreAPI, BinjaContextAPI
disassembler = BinjaCoreAPI()
DisassemblerContextAPI = BinjaContextAPI
except ImportError:
pass
#--------------------------------------------------------------------------
# Unknown Disassembler
#--------------------------------------------------------------------------
if disassembler is None:
raise NotImplementedError("Unknown or unsupported disassembler!")
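The selector above boils down to a try-import fallback chain: attempt each backend in priority order, keep the first one that imports. A generic sketch with invented stand-in module names (neither exists here, so the sketch falls through to the unknown case):

```python
disassembler = None

# probe candidate backends in priority order; these are stand-in
# names for illustration, not the real shim modules
for module_name in ("ida_api_stub", "binja_api_stub"):
    try:
        disassembler = __import__(module_name)
        break
    except ImportError:
        continue

# fall back to an 'unknown' marker instead of raising, for the sketch
backend = getattr(disassembler, "NAME", "unknown") if disassembler else "unknown"
```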
@@ -0,0 +1,466 @@
import abc
import logging
from ..qt import QT_AVAILABLE, QtGui, QtWidgets
logger = logging.getLogger("Lighthouse.API")
#------------------------------------------------------------------------------
# Disassembler API
#------------------------------------------------------------------------------
#
# the purpose of this file is to provide an abstraction layer for the more
# generic disassembler APIs required by the plugin codebase. we strive to
# use (or extend) this API for the bulk of our disassembler operations,
# making the plugin as disassembler-agnostic as possible.
#
# by subclassing the templated classes below, the plugin can support other
# disassembler platforms relatively easily. at the moment, implementing these
# subclasses is ~50% of the work that is required to add lighthouse support
# to any given interactive disassembler.
#
class DisassemblerCoreAPI(object):
"""
An abstract implementation of the core disassembler APIs.
"""
__metaclass__ = abc.ABCMeta
# the name of the disassembler framework, eg 'IDA' or 'BINJA'
NAME = NotImplemented
@abc.abstractmethod
def __init__(self):
self._ctxs = {}
# required version fields
self._version_major = NotImplemented
self._version_minor = NotImplemented
self._version_patch = NotImplemented
if not self.headless and QT_AVAILABLE:
from ..qt import WaitBox
self._waitbox = WaitBox("Please wait...")
else:
self._waitbox = None
def __delitem__(self, key):
del self._ctxs[key]
def __getitem__(self, key):
return self._ctxs[key]
def __setitem__(self, key, value):
self._ctxs[key] = value
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
def version_major(self):
"""
Return the major version number of the disassembler framework.
"""
assert self._version_major != NotImplemented
return self._version_major
def version_minor(self):
"""
Return the minor version number of the disassembler framework.
"""
assert self._version_minor != NotImplemented
return self._version_minor
def version_patch(self):
"""
Return the patch version number of the disassembler framework.
"""
assert self._version_patch != NotImplemented
return self._version_patch
@abc.abstractproperty
def headless(self):
"""
Return a bool indicating if the disassembler is running without a GUI.
"""
pass
#--------------------------------------------------------------------------
# Synchronization Decorators
#--------------------------------------------------------------------------
@staticmethod
def execute_read(function):
"""
Thread-safe function decorator to READ from the disassembler database.
"""
raise NotImplementedError("execute_read() has not been implemented")
@staticmethod
def execute_write(function):
"""
Thread-safe function decorator to WRITE to the disassembler database.
"""
raise NotImplementedError("execute_write() has not been implemented")
@staticmethod
def execute_ui(function):
"""
Thread-safe function decorator to perform UI disassembler actions.
This function is generally used for executing UI (Qt) events from
a background thread. as such, your implementation is expected to
transfer execution to the main application thread where it is safe to
perform Qt actions.
"""
raise NotImplementedError("execute_ui() has not been implemented")
#--------------------------------------------------------------------------
# Disassembler Universal APIs
#--------------------------------------------------------------------------
@abc.abstractmethod
def get_disassembler_user_directory(self):
"""
Return the 'user' directory for the disassembler.
"""
pass
@abc.abstractmethod
def get_disassembly_background_color(self):
"""
Return the background color of the disassembly text view.
"""
pass
@abc.abstractmethod
def is_msg_inited(self):
"""
Return a bool indicating if the disassembler output window is initialized.
"""
pass
def warning(self, text):
"""
Display a warning dialog box with the given text.
"""
msgbox = QtWidgets.QMessageBox()
before = msgbox.sizeHint().width()
msgbox.setIcon(QtWidgets.QMessageBox.Critical)
after = msgbox.sizeHint().width()
icon_width = after - before
msgbox.setWindowTitle("Lighthouse Warning")
msgbox.setText(text)
font = msgbox.font()
fm = QtGui.QFontMetricsF(font)
text_width = fm.size(0, text).width()
# don't ask...
spacer = QtWidgets.QSpacerItem(int(text_width*1.1 + icon_width), 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
layout = msgbox.layout()
layout.addItem(spacer, layout.rowCount(), 0, 1, layout.columnCount())
msgbox.setLayout(layout)
# show the dialog
msgbox.exec_()
@abc.abstractmethod
def message(self, message):
"""
Print a message to the disassembler console.
"""
pass
#--------------------------------------------------------------------------
# UI APIs
#--------------------------------------------------------------------------
#
# NOTE: please note, these APIs and their usage is a little ... obtuse.
# this is primarily because the IDA & Binja dockable widget management
# system is rather different.
#
# these APIs make a best effort in unifying the systems in a manner that
# works for this project. it may not be ideal for the universal use case
# but is good enough for our purposes.
#
@abc.abstractmethod
def register_dockable(self, dockable_name, create_widget_callback):
"""
Register a callback with the disassembler to generate dockable widgets.
- dockable_name: the name of the window / dockable to be created
- create_widget_callback: a static function that returns a new dockable widget
The registered callback will be called automatically on certain events
that precede the display of the dockable_name. These events may include
a new database being opened, or show_dockable being called.
"""
pass
@abc.abstractmethod
def create_dockable_widget(self, parent, dockable_name):
"""
Creates a dockable widget.
This function should generally be called within the create_widget_callback
described in register_dockable(...).
"""
pass
@abc.abstractmethod
def show_dockable(self, dockable_name):
"""
Show the named dockable widget.
"""
pass
@abc.abstractmethod
def hide_dockable(self, dockable_name):
"""
Hide the named dockable widget.
"""
pass
#------------------------------------------------------------------------------
# WaitBox API
#------------------------------------------------------------------------------
def show_wait_box(self, text, modal=True):
"""
Show the disassembler universal WaitBox.
"""
assert QT_AVAILABLE, "This function can only be used in a Qt runtime"
self._waitbox.set_text(text)
self._waitbox.show(modal)
def hide_wait_box(self):
"""
Hide the disassembler universal WaitBox.
"""
assert QT_AVAILABLE, "This function can only be used in a Qt runtime"
self._waitbox.hide()
def replace_wait_box(self, text):
"""
Replace the text in the disassembler universal WaitBox.
"""
assert QT_AVAILABLE, "This function can only be used in a Qt runtime"
self._waitbox.set_text(text)
#------------------------------------------------------------------------------
# Disassembler Contextual API
#------------------------------------------------------------------------------
class DisassemblerContextAPI(object):
"""
An abstract implementation of database/contextual disassembler APIs.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def __init__(self, dctx):
self.dctx = dctx
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@abc.abstractproperty
def busy(self):
"""
Return a bool indicating if the disassembler is busy / processing.
"""
pass
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
@abc.abstractmethod
def get_current_address(self):
"""
Return the current cursor address in the open database.
"""
pass
@abc.abstractmethod
def get_database_directory(self):
"""
Return the directory for the open database.
"""
pass
@abc.abstractmethod
def get_function_addresses(self):
"""
Return all defined function addresses in the open database.
"""
pass
@abc.abstractmethod
def get_function_name_at(self, address):
"""
Return the name of the function at the given address.
This is generally the user-facing/demangled name seen throughout the
disassembler and is probably what you want to use for almost everything.
"""
pass
@abc.abstractmethod
def get_function_raw_name_at(self, address):
"""
Return the raw (eg, unmangled) name of the function at the given address.
On the backend, most disassemblers store what is called the 'true' or
'raw' (eg, unmangled) function name.
"""
pass
@abc.abstractmethod
def get_imagebase(self):
"""
Return the base address of the open database.
"""
pass
@abc.abstractmethod
def get_root_filename(self):
"""
Return the root executable (file) name used to generate the database.
"""
pass
@abc.abstractmethod
def navigate(self, address, function_address=None):
"""
Jump the disassembler UI to the given address.
"""
pass
@abc.abstractmethod
def navigate_to_function(self, function_address, address):
"""
Jump the disassembler UI to the given address, within a function.
"""
pass
@abc.abstractmethod
def set_function_name_at(self, function_address, new_name):
"""
Set the function name at the given address.
"""
pass
#--------------------------------------------------------------------------
# Hooks API
#--------------------------------------------------------------------------
@abc.abstractmethod
def create_rename_hooks(self):
"""
Returns a hooking object that can capture rename events for this context.
"""
pass
#--------------------------------------------------------------------------
# Function Prefix API
#--------------------------------------------------------------------------
#
# the following APIs are used to apply or clear prefixes to multiple
# functions in the disassembly database. the only thing you're expected
# to do here is select an appropriate PREFIX_SEPARATOR.
#
# your prefix separator is expected to be something unique, that a user
# would probably *never* put into their function name themselves but
# looks somewhat normal.
#
# in IDA, putting '%' in a function name appears as '_' in the function
# list, so we use that as a prefix separator. in Binary Ninja, we use a
# unicode character that looks like an underscore character.
#
# it is probably safe to steal the unicode char we use with binja for
# your own implementation.
#
PREFIX_SEPARATOR = NotImplemented
def prefix_function(self, function_address, prefix):
"""
Prefix a function name with the given string.
"""
original_name = self.get_function_raw_name_at(function_address)
new_name = str(prefix) + self.PREFIX_SEPARATOR + str(original_name)
# rename the function with the newly prefixed name
self.set_function_name_at(function_address, new_name)
def prefix_functions(self, function_addresses, prefix):
"""
Prefix a list of functions with the given string.
"""
for function_address in function_addresses:
self.prefix_function(function_address, prefix)
def clear_prefix(self, function_address):
"""
Clear the prefix from a given function.
"""
prefixed_name = self.get_function_raw_name_at(function_address)
#
# split the function name on the last prefix separator, saving
# everything that comes after (eg, the original func name)
#
new_name = prefixed_name.rsplit(self.PREFIX_SEPARATOR)[-1]
# the name doesn't appear to have had a prefix, nothing to do...
if new_name == prefixed_name:
return
# rename the function with the prefix(s) now stripped
self.set_function_name_at(function_address, new_name)
def clear_prefixes(self, function_addresses):
"""
Clear the prefix from a list of given functions.
"""
for function_address in function_addresses:
self.clear_prefix(function_address)
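The prefix logic above hinges on `str.rsplit`: splitting on the separator and keeping the last piece strips every stacked prefix in one shot. A plain-string sketch using `'%'` (the IDA separator mentioned in the comment above; the helper names are invented):

```python
SEP = "%"  # the IDA prefix separator

def prefix_name(name, prefix):
    """Apply a prefix to a plain function name."""
    return prefix + SEP + name

def clear_prefix(name):
    # rsplit + [-1] keeps everything after the *last* separator,
    # so any number of stacked prefixes vanish at once
    return name.rsplit(SEP)[-1]

stacked = prefix_name(prefix_name("sub_401000", "COLD"), "HOT")
# stacked == "HOT%COLD%sub_401000"
```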
#------------------------------------------------------------------------------
# Hooking
#------------------------------------------------------------------------------
class RenameHooks(object):
"""
An abstract implementation of disassembler hooks to capture rename events.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def hook(self):
"""
Install hooks into the disassembler that capture rename events.
"""
pass
@abc.abstractmethod
def unhook(self):
"""
Remove hooks used to capture rename events.
"""
pass
def name_changed(self, address, new_name):
"""
This will be hooked by Lighthouse at runtime to capture rename events.
"""
pass
@@ -0,0 +1,423 @@
# -*- coding: utf-8 -*-
import os
import sys
import logging
import functools
import threading
import collections
from .api import DisassemblerCoreAPI, DisassemblerContextAPI
from ..qt import *
from ..misc import is_mainthread, not_mainthread
import binaryninja
from binaryninja import PythonScriptingInstance, binaryview
from binaryninja.plugin import BackgroundTaskThread
logger = logging.getLogger("Lighthouse.API.Binja")
#------------------------------------------------------------------------------
# Utils
#------------------------------------------------------------------------------
def execute_sync(function):
"""
Synchronize with the disassembler for safe database access.
"""
@functools.wraps(function)
def wrapper(*args, **kwargs):
#
# in Binary Ninja, it is only safe to access the BNDB from a thread
# that is *not* the mainthread. if we appear to already be in a
# background thread of some sort, simply execute the given function
#
if not is_mainthread():
return function(*args, **kwargs)
#
# if we are in the mainthread, we need to schedule a background
# task to perform our database task/function instead
#
# this inline function definition is technically what will execute
# in a database-safe background thread. we use this thunk to
# capture any output the function may want to return to the user.
#
output = [None]
def thunk():
output[0] = function(*args, **kwargs)
return 1
class DatabaseRead(BackgroundTaskThread):
"""
A stub task to safely read from the BNDB.
"""
def __init__(self, text, function):
super(DatabaseRead, self).__init__(text, False)
self._task_to_run = function
def run(self):
self._task_to_run()
self.finish()
# schedule the database read and wait for its completion
t = DatabaseRead("Accessing database...", thunk)
t.start()
t.join()
# return the output of the synchronized execution / read
return output[0]
return wrapper
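The wrapper above can be understood without Binary Ninja: run the function on a worker thread, smuggle its return value out through a one-slot list, and join before returning. A plain-threading sketch of that shape (using `threading.Thread` in place of `BackgroundTaskThread`):

```python
import functools
import threading

def execute_sync(function):
    """Run function on a worker thread and return its result (sketch)."""
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        output = [None]  # one-slot box to capture the return value
        def thunk():
            output[0] = function(*args, **kwargs)
        worker = threading.Thread(target=thunk)
        worker.start()
        worker.join()  # block until the 'database access' completes
        return output[0]
    return wrapper

@execute_sync
def add(a, b):
    return a + b
```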
#------------------------------------------------------------------------------
# Disassembler API
#------------------------------------------------------------------------------
class BinjaCoreAPI(DisassemblerCoreAPI):
NAME = "BINJA"
def __init__(self):
super(BinjaCoreAPI, self).__init__()
self._init_version()
def _init_version(self):
version_string = binaryninja.core_version()
# retrieve Binja's version #
if "-" in version_string: # dev
disassembler_version = version_string.split("-", 1)[0]
else: # commercial, personal
disassembler_version = version_string.split(" ", 1)[0]
major, minor, patch, *_ = disassembler_version.split(".") + ['0']
# save the version number components for later use (as ints, to match
# the numeric comparisons made against them elsewhere)
self._version_major = int(major)
self._version_minor = int(minor)
self._version_patch = int(patch)
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def headless(self):
return not binaryninja.core_ui_enabled()
#--------------------------------------------------------------------------
# Synchronization Decorators
#--------------------------------------------------------------------------
@staticmethod
def execute_read(function):
return execute_sync(function)
@staticmethod
def execute_write(function):
return execute_sync(function)
@staticmethod
def execute_ui(function):
@functools.wraps(function)
def wrapper(*args, **kwargs):
ff = functools.partial(function, *args, **kwargs)
# if we are already in the main (UI) thread, execute now
if is_mainthread():
ff()
return
# schedule the task to run in the main thread
binaryninja.execute_on_main_thread(ff)
return wrapper
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
def get_disassembler_user_directory(self):
return os.path.split(binaryninja.user_plugin_path())[0]
def get_disassembly_background_color(self):
return binaryninjaui.getThemeColor(binaryninjaui.ThemeColor.LinearDisassemblyBlockColor)
def is_msg_inited(self):
return True
@execute_ui.__func__
def warning(self, text):
super(BinjaCoreAPI, self).warning(text)
def message(self, message):
print(message)
#--------------------------------------------------------------------------
# UI API Shims
#--------------------------------------------------------------------------
def register_dockable(self, dockable_name, create_widget_callback):
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.addDockWidget(dockable_name, create_widget_callback, QtCore.Qt.RightDockWidgetArea, QtCore.Qt.Horizontal, False)
def create_dockable_widget(self, parent, dockable_name):
return DockableWidget(parent, dockable_name)
def show_dockable(self, dockable_name):
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.setVisible(dockable_name, True)
def hide_dockable(self, dockable_name):
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.setVisible(dockable_name, False)
#--------------------------------------------------------------------------
# XXX Binja Specific Helpers
#--------------------------------------------------------------------------
def binja_get_bv_from_dock(self):
dh = DockHandler.getActiveDockHandler()
if not dh:
return None
vf = dh.getViewFrame()
if not vf:
return None
vi = vf.getCurrentViewInterface()
bv = vi.getData()
return bv
class BinjaContextAPI(DisassemblerContextAPI):
def __init__(self, dctx):
super(BinjaContextAPI, self).__init__(dctx)
self.bv = dctx
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def busy(self):
return self.bv.analysis_info.state != binaryninja.enums.AnalysisState.IdleState
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
def get_current_address(self):
# TODO/V35: this doesn't work because of the loss of context bug...
#ctx = UIContext.activeContext()
#ah = ctx.contentActionHandler()
#ac = ah.actionContext()
#return ac.address
dh = DockHandler.getActiveDockHandler()
if not dh:
return 0
vf = dh.getViewFrame()
if not vf:
return 0
ac = vf.actionContext()
if not ac:
return 0
return ac.address
@BinjaCoreAPI.execute_read
def get_database_directory(self):
return os.path.dirname(self.bv.file.filename)
@not_mainthread
def get_function_addresses(self):
return [x.start for x in self.bv.functions]
def get_function_name_at(self, address):
func = self.bv.get_function_at(address)
if not func:
return None
return func.symbol.short_name
@BinjaCoreAPI.execute_read
def get_function_raw_name_at(self, address):
func = self.bv.get_function_at(address)
if not func:
return None
return func.name
@not_mainthread
def get_imagebase(self):
return self.bv.start
@not_mainthread
def get_root_filename(self):
return os.path.basename(self.bv.file.original_filename)
def navigate(self, address):
return self.bv.navigate(self.bv.view, address)
def navigate_to_function(self, function_address, address):
#
# attempt a more 'precise' jump, that guarantees to place us within
# the given function. this is necessary when trying to jump to an
# an address/node that is shared between two functions
#
funcs = self.bv.get_functions_containing(address)
if not funcs:
return False
#
# try to find the function that contains our target (address) and has
# a matching function start...
#
for func in funcs:
if func.start == function_address:
break
# no matching function ???
else:
return False
dh = DockHandler.getActiveDockHandler()
vf = dh.getViewFrame()
vi = vf.getCurrentViewInterface()
return vi.navigateToFunction(func, address)
def set_function_name_at(self, function_address, new_name):
func = self.bv.get_function_at(function_address)
if not func:
return
if new_name == "":
new_name = None
state = self.bv.begin_undo_actions()
func.name = new_name
self.bv.commit_undo_actions(state)
#--------------------------------------------------------------------------
# Hooks API
#--------------------------------------------------------------------------
def create_rename_hooks(self):
return RenameHooks(self.bv)
#------------------------------------------------------------------------------
# Function Prefix API
#------------------------------------------------------------------------------
PREFIX_SEPARATOR = "\u2581" # Unicode 0x2581 (LOWER ONE EIGHTH BLOCK)
#------------------------------------------------------------------------------
# Hooking
#------------------------------------------------------------------------------
class RenameHooks(binaryview.BinaryDataNotification):
"""
A hooking class to catch symbol changes in Binary Ninja.
"""
def __init__(self, bv):
self._bv = bv
def hook(self):
self._bv.register_notification(self)
def unhook(self):
self._bv.unregister_notification(self)
def symbol_added(self, *args):
self.__symbol_handler(*args)
def symbol_updated(self, *args):
self.__symbol_handler(*args)
def symbol_removed(self, *args):
self.__symbol_handler(*args, True)
def __symbol_handler(self, view, symbol, removed=False):
func = self._bv.get_function_at(symbol.address)
if not func or func.start != symbol.address:
return
if removed:
self.name_changed(symbol.address, "sub_%x" % symbol.address)
else:
self.name_changed(symbol.address, symbol.name)
def name_changed(self, address, name):
"""
A placeholder callback, which will get hooked / replaced once live.
"""
pass
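Stripped of the Binary Ninja types, `RenameHooks` is a plain observer: register with a notifier, receive callbacks, and let the consumer overwrite `name_changed` at runtime. A minimal sketch, where `FakeNotifier` is invented to stand in for the BinaryView's notification registry:

```python
class FakeNotifier:
    """Stand-in for a BinaryView's notification registry."""
    def __init__(self):
        self._listeners = []
    def register_notification(self, listener):
        self._listeners.append(listener)
    def unregister_notification(self, listener):
        self._listeners.remove(listener)
    def fire_rename(self, address, name):
        for listener in list(self._listeners):
            listener.name_changed(address, name)

class RenameHooks:
    def __init__(self, bv):
        self._bv = bv
    def hook(self):
        self._bv.register_notification(self)
    def unhook(self):
        self._bv.unregister_notification(self)
    def name_changed(self, address, name):
        pass  # overwritten by the consumer at runtime

events = []
bv = FakeNotifier()
hooks = RenameHooks(bv)
hooks.name_changed = lambda address, name: events.append((address, name))
hooks.hook()
bv.fire_rename(0x401000, "my_func")   # captured
hooks.unhook()
bv.fire_rename(0x402000, "ignored")   # not captured after unhook
```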
#------------------------------------------------------------------------------
# UI
#------------------------------------------------------------------------------
if QT_AVAILABLE:
import binaryninjaui
from binaryninjaui import DockHandler, DockContextHandler, UIContext, UIActionHandler
class DockableWidget(QtWidgets.QWidget, DockContextHandler):
"""
A dockable Qt widget for Binary Ninja.
"""
def __init__(self, parent, name):
QtWidgets.QWidget.__init__(self, parent)
DockContextHandler.__init__(self, self, name)
self.actionHandler = UIActionHandler()
self.actionHandler.setupActionHandler(self)
self._active_view = None
self._visible_for_view = collections.defaultdict(lambda: False)
@property
def visible(self):
return self._visible_for_view[self._active_view]
@visible.setter
def visible(self, is_visible):
self._visible_for_view[self._active_view] = is_visible
def shouldBeVisible(self, view_frame):
if not view_frame:
return False
if USING_PYSIDE6:
import shiboken6 as shiboken
else:
import shiboken2 as shiboken
vf_ptr = shiboken.getCppPointer(view_frame)[0]
return self._visible_for_view[vf_ptr]
def notifyVisibilityChanged(self, is_visible):
self.visible = is_visible
def notifyViewChanged(self, view_frame):
if not view_frame:
self._active_view = None
return
if USING_PYSIDE6:
import shiboken6 as shiboken
else:
import shiboken2 as shiboken
self._active_view = shiboken.getCppPointer(view_frame)[0]
if self.visible:
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.setVisible(self.m_name, True)
@@ -0,0 +1,613 @@
import os
import sys
import time
import logging
import binascii
import tempfile
import functools
import idaapi
import idautils
if int(idaapi.get_kernel_version()[0]) < 7:
idaapi.warning("Lighthouse has deprecated support for IDA 6, please upgrade.")
raise ImportError
from .api import DisassemblerCoreAPI, DisassemblerContextAPI
from ..qt import *
from ..misc import is_mainthread, get_string_between
logger = logging.getLogger("Lighthouse.API.IDA")
#------------------------------------------------------------------------------
# Utils
#------------------------------------------------------------------------------
def execute_sync(function, sync_type):
"""
Synchronize with the disassembler for safe database access.
Modified from https://github.com/vrtadmin/FIRST-plugin-ida
"""
@functools.wraps(function)
def wrapper(*args, **kwargs):
output = [None]
#
# this inline function definition is technically what will execute
# in the context of the main thread. we use this thunk to capture
# any output the function may want to return to the user.
#
def thunk():
output[0] = function(*args, **kwargs)
return 1
if is_mainthread():
thunk()
else:
idaapi.execute_sync(thunk, sync_type)
# return the output of the synchronized execution
return output[0]
return wrapper
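The thunk pattern above (capturing the wrapped function's return value through a one-element list in the closure) can be demonstrated without IDA. In this minimal sketch, a hypothetical `inline_executor` stands in for `idaapi.execute_sync`:

```python
import functools

def execute_sync(function, executor):
    # same thunk pattern as the IDA version; `executor` is a stand-in
    # for idaapi.execute_sync (here, it just runs the thunk inline)
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        output = [None]  # closure cell that smuggles the return value out
        def thunk():
            output[0] = function(*args, **kwargs)
            return 1     # idaapi.execute_sync expects an int return
        executor(thunk)
        return output[0]
    return wrapper

def inline_executor(thunk):
    # pretend to marshal onto the main thread by calling the thunk directly
    thunk()

def get_answer():
    return 42

wrapped = execute_sync(get_answer, inline_executor)
result = wrapped()  # -> 42
```

The list is used (rather than a bare local) because the inner `thunk` cannot rebind a name in the enclosing scope without `nonlocal`; mutating `output[0]` works in both Python 2 and 3.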
#------------------------------------------------------------------------------
# Disassembler Core API (universal)
#------------------------------------------------------------------------------
class IDACoreAPI(DisassemblerCoreAPI):
NAME = "IDA"
def __init__(self):
super(IDACoreAPI, self).__init__()
self._dockable_factory = {}
self._dockable_widgets = {}
self._init_version()
def _init_version(self):
# retrieve IDA's version #
disassembler_version = idaapi.get_kernel_version()
major, minor = map(int, disassembler_version.split("."))
# save the version number components for later use
self._version_major = major
self._version_minor = minor
self._version_patch = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def headless(self):
return idaapi.cvar.batch
#--------------------------------------------------------------------------
# Synchronization Decorators
#--------------------------------------------------------------------------
@staticmethod
def execute_read(function):
return execute_sync(function, idaapi.MFF_READ)
@staticmethod
def execute_write(function):
return execute_sync(function, idaapi.MFF_WRITE)
@staticmethod
def execute_ui(function):
return execute_sync(function, idaapi.MFF_FAST)
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
def get_disassembler_user_directory(self):
return idaapi.get_user_idadir()
def get_disassembly_background_color(self):
"""
Get the background color of the IDA disassembly view.
Since there is no supported way to probe the palette & colors in use by
IDA, we must get creative. This function attempts to locate an IDA
disassembly view, and take a screenshot of said widget. It will then
attempt to extract the color of a single background pixel (hopefully).
"""
# method one
color = self._get_ida_bg_color_from_file()
if color:
return color
# method two, fallback
color = self._get_ida_bg_color_from_view()
if not color:
return None
# return the found background color
return color
def is_msg_inited(self):
return idaapi.is_msg_inited()
@execute_ui.__func__
def warning(self, text):
super(IDACoreAPI, self).warning(text)
@execute_ui.__func__
def message(self, message):
print(message)
#--------------------------------------------------------------------------
# UI API Shims
#--------------------------------------------------------------------------
def register_dockable(self, dockable_name, create_widget_callback):
self._dockable_factory[dockable_name] = create_widget_callback
def create_dockable_widget(self, parent, dockable_name):
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
if USING_PYSIDE6:
from shiboken6 import wrapInstance
else:
import sip
def wrapInstance(ptr, base=None):
return sip.wrapinstance(int(ptr), base)
# create a dockable widget, and save a reference to it for later use
twidget = idaapi.create_empty_widget(dockable_name)
self._dockable_widgets[dockable_name] = twidget
# cast the IDA 'twidget' as a Qt widget for use
widget = wrapInstance(int(twidget), QtWidgets.QWidget)
widget.name = dockable_name
widget.visible = False
# return the dockable QtWidget / container
return widget
def show_dockable(self, dockable_name):
try:
make_dockable = self._dockable_factory[dockable_name]
except KeyError:
return False
parent, dctx = None, None # not used for IDA's integration
widget = make_dockable(dockable_name, parent, dctx)
# get the original twidget, so we can use it with the IDA API's
#twidget = idaapi.TWidget__from_ptrval__(widget) NOTE: IDA 7.2+ only...
twidget = self._dockable_widgets.pop(dockable_name)
if not twidget:
self.warning("Could not open dockable window, because its reference is gone?!?")
return
# show the dockable widget
flags = idaapi.PluginForm.WOPN_TAB | idaapi.PluginForm.WOPN_RESTORE | idaapi.PluginForm.WOPN_PERSIST
idaapi.display_widget(twidget, flags)
widget.visible = True
# attempt to 'dock' the widget in a reasonable location
for target in ["IDA View-A", "Pseudocode-A"]:
dwidget = idaapi.find_widget(target)
if dwidget:
idaapi.set_dock_pos(dockable_name, target, idaapi.DP_RIGHT)
break
def hide_dockable(self, dockable_name):
pass # TODO/IDA: this should never actually be called by lighthouse right now
#--------------------------------------------------------------------------
# Theme Prediction Helpers (Internal)
#--------------------------------------------------------------------------
def _get_ida_bg_color_from_file(self):
"""
Get the background color of the IDA disassembly views via HTML export.
"""
logger.debug("Attempting to get IDA disassembly background color from HTML...")
#
# TODO/IDA: we need better early detection for if IDA is fully ready,
# this isn't effective and this theme func can crash IDA if
# called too early (eg, during db load...).
#
# this isn't a problem now... but I don't want us to be at risk of
# hard crashing people's IDA in the future should we change something.
#
imagebase = idaapi.get_imagebase()
#if imagebase == idaapi.BADADDR:
# logger.debug(" - No imagebase...")
# return None
# create a temp file that we can write to
handle, path = tempfile.mkstemp()
os.close(handle)
# attempt to generate an 'html' dump of the first 0x20 bytes (instructions)
ida_fd = idaapi.fopenWT(path)
idaapi.gen_file(idaapi.OFILE_LST, ida_fd, imagebase, imagebase+0x20, idaapi.GENFLG_GENHTML)
# IDA 9.x SDK fix: removed `idaapi.eclose`, added `ida_fpro.qfclose`
if int(idaapi.get_kernel_version()[0]) >= 9:
import ida_fpro
ida_fpro.qfclose(ida_fd)
else:
idaapi.eclose(ida_fd)
# read the dumped text
with open(path, "r") as fd:
html = fd.read()
# delete the temp file from disk
try:
os.remove(path)
except OSError:
pass
# attempt to parse the user's disassembly background color from the html (7.0?)
bg_color_text = get_string_between(html, '<body bgcolor="', '">')
if bg_color_text:
logger.debug(" - Extracted bgcolor '%s' from regex!" % bg_color_text)
return QtGui.QColor(bg_color_text)
#
# sometimes the above one isn't present... so try this one (7.1 - 7.4 maybe?)
#
# TODO: IDA 7.5 says c1 is /* line-fg-default */ ... but it's possible c1
# had the bg color of the line in other builds of 7.x? I'm not sure but
# this should be double checked at some point and can maybe just be removed
# in favor of c41 (line-bg-default) as that's what we really want
#
bg_color_text = get_string_between(html, '.c1 \{ background-color: ', ';')
if bg_color_text:
logger.debug(" - Extracted background-color '%s' from line-fg-default!" % bg_color_text)
return QtGui.QColor(bg_color_text)
# -- IDA 7.5 says c41 is /* line-bg-default */, a.k.a the bg color for disassembly text
bg_color_text = get_string_between(html, '.c41 \{ background-color: ', ';')
if bg_color_text:
logger.debug(" - Extracted background-color '%s' from line-bg-default!" % bg_color_text)
return QtGui.QColor(bg_color_text)
logger.debug(" - HTML color regex failed...")
logger.debug(html)
return None
def _get_ida_bg_color_from_view(self):
"""
Get the background color of the IDA disassembly views via widget inspection.
"""
logger.debug("Attempting to get IDA disassembly background color from view...")
names = ["Enums", "Structures"]
names += ["Hex View-%u" % i for i in range(5)]
names += ["IDA View-%c" % chr(ord('A') + i) for i in range(5)]
# find a form (eg, IDA view) to analyze colors from
for window_name in names:
twidget = idaapi.find_widget(window_name)
if twidget:
break
else:
logger.debug(" - Failed to find donor view...")
return None
# touch the target form so we know it is populated
self._touch_ida_window(twidget)
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
if USING_PYSIDE6:
from shiboken6 import wrapInstance
else:
import sip
def wrapInstance(ptr, base=None):
return sip.wrapinstance(int(ptr), base)
widget = wrapInstance(int(twidget), QtWidgets.QWidget)
pixmap = widget.grab(QtCore.QRect(0, 10, widget.width(), 1))
# convert the raw pixmap into an image (easier to interface with)
image = QtGui.QImage(pixmap.toImage())
# return the predicted background color
return QtGui.QColor(predict_bg_color(image))
def _touch_ida_window(self, target):
"""
Touch a window/widget/form to ensure it gets drawn by IDA.
XXX/HACK:
We need to ensure that the widget we will analyze actually gets drawn
so that there are colors for us to steal.
To do this, we switch to it, and switch back. I tried a few different
ways to trigger this from Qt, but could only trigger the full
painting by going through the IDA routines.
"""
# get the currently active widget/form title (the form itself seems transient...)
twidget = idaapi.get_current_widget()
title = idaapi.get_widget_title(twidget)
# touch the target window by switching to it
idaapi.activate_widget(target, True)
flush_qt_events()
# locate our previous selection
previous_twidget = idaapi.find_widget(title)
# return us to our previous selection
idaapi.activate_widget(previous_twidget, True)
flush_qt_events()
#------------------------------------------------------------------------------
# Disassembler Context API (database-specific)
#------------------------------------------------------------------------------
class IDAContextAPI(DisassemblerContextAPI):
def __init__(self, dctx):
super(IDAContextAPI, self).__init__(dctx)
@property
def busy(self):
return not(idaapi.auto_is_ok())
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
@IDACoreAPI.execute_read
def get_current_address(self):
return idaapi.get_screen_ea()
def get_database_directory(self):
return idautils.GetIdbDir()
def get_function_addresses(self):
return list(idautils.Functions())
def get_function_name_at(self, address):
return idaapi.get_short_name(address)
def get_function_raw_name_at(self, function_address):
return idaapi.get_name(function_address)
def get_imagebase(self):
return idaapi.get_imagebase()
def get_root_filename(self):
return idaapi.get_root_filename()
def navigate(self, address):
return idaapi.jumpto(address)
def navigate_to_function(self, function_address, address):
return self.navigate(address)
def set_function_name_at(self, function_address, new_name):
idaapi.set_name(function_address, new_name, idaapi.SN_NOWARN)
#--------------------------------------------------------------------------
# Hooks API
#--------------------------------------------------------------------------
def create_rename_hooks(self):
return RenameHooks()
#------------------------------------------------------------------------------
# Function Prefix API
#------------------------------------------------------------------------------
PREFIX_SEPARATOR = "%"
#------------------------------------------------------------------------------
# Hooking
#------------------------------------------------------------------------------
class RenameHooks(idaapi.IDB_Hooks):
def renamed(self, address, new_name, local_name):
"""
Capture all IDA rename events.
"""
# we should never care about local renames (eg, loc_40804b), ignore
if local_name or new_name.startswith("loc_"):
return 0
rendered_name = idaapi.get_short_name(address)
# call the 'renamed' callback, that will get hooked by a listener
self.name_changed(address, rendered_name)
# must return 0 to keep IDA happy...
return 0
def name_changed(self, address, new_name):
"""
A placeholder callback, which will get hooked / replaced once live.
"""
pass
#------------------------------------------------------------------------------
# HexRays Util
#------------------------------------------------------------------------------
def hexrays_available():
"""
Return True if an IDA decompiler is loaded and available for use.
"""
try:
import ida_hexrays
return ida_hexrays.init_hexrays_plugin()
except ImportError:
return False
def map_line2citem(decompilation_text):
"""
Map decompilation line numbers to citems.
This function allows us to build a relationship between citems in the
ctree and specific lines in the hexrays decompilation text.
Output:
+- line2citem:
| a map keyed with line numbers, holding sets of citem indexes
|
| eg: { int(line_number): sets(citem_indexes), ... }
'
"""
line2citem = {}
#
# it turns out that citem indexes are actually stored inline with the
# decompilation text output, hidden behind COLOR_ADDR tokens.
#
# here we pass each line of raw decompilation text to our crappy lexer,
# extracting any COLOR_ADDR tokens as citem indexes
#
for line_number in xrange(decompilation_text.size()):
line_text = decompilation_text[line_number].line
line2citem[line_number] = lex_citem_indexes(line_text)
#logger.debug("Line Text: %s" % binascii.hexlify(line_text))
return line2citem
def map_line2node(cfunc, metadata, line2citem):
"""
Map decompilation line numbers to node (basic blocks) addresses.
This function allows us to build a relationship between graph nodes
(basic blocks) and specific lines in the hexrays decompilation text.
Output:
+- line2node:
| a map keyed with line numbers, holding sets of node addresses
|
| eg: { int(line_number): set(nodes), ... }
'
"""
line2node = {}
treeitems = cfunc.treeitems
function_address = cfunc.entry_ea
#
# prior to this function, a line2citem map was built to tell us which
# citems reside on any given line of text in the decompilation output.
#
# now, we walk through this line2citem map one 'line_number' at a time in
# an effort to resolve the set of graph nodes associated with its citems.
#
for line_number, citem_indexes in iteritems(line2citem):
nodes = set()
#
# we are at the level of a single line (line_number). we now consume
# its set of citems (citem_indexes) and attempt to identify explicit
# graph nodes they claim to be sourced from (by their reported EA)
#
for index in citem_indexes:
# get the code address of the given citem
try:
item = treeitems[index]
address = item.ea
# apparently this is a thing on IDA 6.95
except IndexError as e:
continue
# find the graph node (eg, basic block) that generated this citem
node = metadata.get_node(address)
# address not mapped to a node... weird. continue to the next citem
if not node:
#logger.warning("Failed to map node to basic block")
continue
#
# we made it this far, so we must have found a node that contains
# this citem. save the computed node_id to the list of known
# nodes we have associated with this line of text
#
nodes.add(node.address)
#
# finally, save the completed list of node ids as identified for this
# line of decompilation text to the line2node map that we are building
#
line2node[line_number] = nodes
# all done, return the computed map
return line2node
def lex_citem_indexes(line):
"""
Lex all ctree item indexes from a given line of text.
The HexRays decompiler output contains invisible text tokens that can
be used to attribute spans of text to the ctree items that produced them.
This function will simply scrape and return a list of all these
tokens (COLOR_ADDR) which contain item indexes into the ctree.
"""
i = 0
indexes = []
line_length = len(line)
# lex COLOR_ADDR tokens from the line of text
while i < line_length:
# does this character mark the start of a new COLOR_* token?
if line[i] == idaapi.COLOR_ON:
# yes, so move past the COLOR_ON byte
i += 1
# is this sequence for a COLOR_ADDR?
if ord(line[i]) == idaapi.COLOR_ADDR:
# yes, so move past the COLOR_ADDR byte
i += 1
#
# A COLOR_ADDR token is followed by either 8 or 16 characters
# (a hex encoded number) that represents an address/pointer.
# in this context, it is actually the index number of a citem
#
citem_index = int(line[i:i+idaapi.COLOR_ADDR_SIZE], 16)
i += idaapi.COLOR_ADDR_SIZE
# save the extracted citem index
indexes.append(citem_index)
# skip to the next iteration as i has moved
continue
# nothing we care about happened, keep lexing forward
i += 1
# return all the citem indexes extracted from this line of text
return indexes
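The lexing loop above can be exercised outside of IDA. This standalone sketch re-implements the same scan using hypothetical stand-in values for the `idaapi` constants (the real values may differ) and a synthetic line of "decompilation" text:

```python
# hypothetical stand-ins for the idaapi constants (actual values may differ)
COLOR_ON = "\x01"
COLOR_ADDR = 0x28
COLOR_ADDR_SIZE = 16

def lex_citem_indexes(line):
    i, indexes, n = 0, [], len(line)
    while i < n:
        if line[i] == COLOR_ON:              # start of a COLOR_* token
            i += 1
            if ord(line[i]) == COLOR_ADDR:   # is it a COLOR_ADDR token?
                i += 1
                # consume the hex-encoded citem index that follows
                indexes.append(int(line[i:i + COLOR_ADDR_SIZE], 16))
                i += COLOR_ADDR_SIZE
                continue
        i += 1
    return indexes

# a synthetic line embedding citem index 0x1f behind a COLOR_ADDR token
line = "if ( " + COLOR_ON + chr(COLOR_ADDR) + ("%016x" % 0x1f) + "x )"
found = lex_citem_indexes(line)  # -> [0x1f]
```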
@@ -2,11 +2,8 @@ import os
import sys
import logging
import idaapi
- #
- # shamelessly ripped from Solidarity
- #
from .misc import makedirs
from .disassembler import disassembler
#------------------------------------------------------------------------------
# Log / Print helpers
@@ -14,15 +11,15 @@ import idaapi
def lmsg(message):
"""
- Print a message to the IDA output window, prefixed with [Lighthouse]
+ Print a message to the disassembler output window, prefixed with [Lighthouse]
"""
# prefix the message
prefix_message = "[Lighthouse] %s" % message
- # only print to IDA if the output window is alive
- if idaapi.is_msg_inited():
- print prefix_message
+ # only print to disassembler if its output window is alive
+ if disassembler.is_msg_inited():
+ disassembler.message(prefix_message)
else:
logger.info(message)
@@ -30,7 +27,11 @@ def get_log_dir():
"""
Return the Lighthouse log directory.
"""
- return os.path.join(idaapi.get_user_idadir(), "lighthouse_logs")
+ log_directory = os.path.join(
+ disassembler.get_disassembler_user_directory(),
+ "lighthouse_logs"
+ )
+ return log_directory
def logging_started():
"""
@@ -54,7 +55,8 @@ class LoggerProxy(object):
def write(self, buf):
for line in buf.rstrip().splitlines():
self._logger.log(self._log_level, line.rstrip())
- self._stream.write(buf)
+ if self._stream:
+ self._stream.write(buf)
def flush(self):
pass
@@ -66,7 +68,7 @@ class LoggerProxy(object):
# Initialize Logging
#------------------------------------------------------------------------------
- MAX_LOGS = 5
+ MAX_LOGS = 10
def cleanup_log_directory(log_directory):
"""
Retain only the most recent MAX_LOGS log files.
@@ -79,8 +81,8 @@ def cleanup_log_directory(log_directory):
if os.path.isfile(filepath):
filetimes[os.path.getmtime(filepath)] = filepath
- # get the filetimes and check if there's enough enough to warrant cleanup
- times = filetimes.keys()
+ # get the filetimes and check if there's enough to warrant cleanup
+ times = list(filetimes.keys())
if len(times) < MAX_LOGS:
return
@@ -101,10 +103,25 @@ def cleanup_log_directory(log_directory):
def start_logging():
global logger
+ # create the Lighthouse logger
+ logger = logging.getLogger("Lighthouse")
+ #
+ # only enable logging if the LIGHTHOUSE_LOGGING environment variable is
+ # present. we simply return a stub logger to sinkhole messages.
+ #
+ if os.getenv("LIGHTHOUSE_LOGGING") == None:
+ logger.disabled = True
+ return logger
# create a directory for lighthouse logs if it does not exist
log_dir = get_log_dir()
- if not os.path.exists(log_dir):
- os.makedirs(log_dir)
+ try:
+ makedirs(log_dir)
+ except Exception as e:
+ logger.disabled = True
+ return logger
# construct the full log path
log_path = os.path.join(log_dir, "lighthouse.%s.log" % os.getpid())
@@ -112,14 +129,11 @@ def start_logging():
# config the logger
logging.basicConfig(
filename=log_path,
- format='%(asctime)s | %(name)20s | %(levelname)7s: %(message)s',
+ format='%(asctime)s | %(name)28s | %(levelname)7s: %(message)s',
datefmt='%m-%d-%Y %H:%M:%S',
level=logging.DEBUG
)
- # create the Lighthouse logger
- logger = logging.getLogger("Lighthouse")
# proxy STDOUT/STDERR to the log files too
stdout_logger = logging.getLogger('Lighthouse.STDOUT')
stderr_logger = logging.getLogger('Lighthouse.STDERR')
@@ -0,0 +1,220 @@
import os
import re
import errno
import struct
import weakref
import datetime
import threading
import collections
from .python import *
BADADDR = 0xFFFFFFFFFFFFFFFF
#------------------------------------------------------------------------------
# Plugin Util
#------------------------------------------------------------------------------
PLUGIN_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
def plugin_resource(resource_name):
"""
Return the full path for a given plugin resource file.
"""
return os.path.join(
PLUGIN_PATH,
"ui",
"resources",
resource_name
)
#------------------------------------------------------------------------------
# Thread Util
#------------------------------------------------------------------------------
def is_mainthread():
"""
Return a bool that indicates if this is the main application thread.
"""
return isinstance(threading.current_thread(), threading._MainThread)
def mainthread(f):
"""
A debug decorator to ensure that a function is always called from the main thread.
"""
def wrapper(*args, **kwargs):
assert is_mainthread()
return f(*args, **kwargs)
return wrapper
def not_mainthread(f):
"""
A debug decorator to ensure that a function is never called from the main thread.
"""
def wrapper(*args, **kwargs):
assert not is_mainthread()
return f(*args, **kwargs)
return wrapper
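The two debug decorators above simply assert on the calling thread's identity. A minimal, self-contained sketch of the behavior (nothing beyond the stdlib is assumed):

```python
import threading

def is_mainthread():
    # same private-API check as the plugin uses
    return isinstance(threading.current_thread(), threading._MainThread)

def mainthread(f):
    # debug decorator: blow up if called off the main thread
    def wrapper(*args, **kwargs):
        assert is_mainthread()
        return f(*args, **kwargs)
    return wrapper

@mainthread
def touch_ui():
    return "ok"

results = []
results.append(touch_ui())  # fine when invoked from the main thread

def worker():
    try:
        touch_ui()
    except AssertionError:
        results.append("blocked")

t = threading.Thread(target=worker)
t.start()
t.join()
# results == ["ok", "blocked"]
```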
#------------------------------------------------------------------------------
# Theme Util
#------------------------------------------------------------------------------
def swap_rgb(i):
"""
Swap RRGGBB (integer) to BBGGRR.
"""
return struct.unpack("<I", struct.pack(">I", i))[0] >> 8
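The pack/unpack trick works by writing the integer big-endian, reading it back little-endian, then shifting off the pad byte. A quick check of the channel swap:

```python
import struct

def swap_rgb(i):
    # 0xRRGGBB -> 0xBBGGRR: pack big-endian, unpack little-endian, drop pad byte
    return struct.unpack("<I", struct.pack(">I", i))[0] >> 8

assert swap_rgb(0x112233) == 0x332211
assert swap_rgb(swap_rgb(0xA0B0C0)) == 0xA0B0C0  # swapping twice is a no-op
```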
def test_color_brightness(color):
"""
Test the brightness of a color.
"""
if color.lightness() > 255.0/2:
return "light"
else:
return "dark"
#------------------------------------------------------------------------------
# Python Util
#------------------------------------------------------------------------------
def makedirs(path, exists_ok=True):
"""
Make a fully qualified path.
"""
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise e
if not exists_ok:
raise e
def chunks(l, n):
"""
Yield successive n-sized chunks from a list (l).
From http://stackoverflow.com/a/312464
"""
for i in xrange(0, len(l), n): # xrange comes from the .python compat shims
yield l[i:i + n]
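Note that the final chunk may be shorter than `n`:

```python
def chunks(l, n):
    # yield successive n-sized chunks from l; the last one may be short
    for i in range(0, len(l), n):
        yield l[i:i + n]

list(chunks([1, 2, 3, 4, 5], 2))  # -> [[1, 2], [3, 4], [5]]
```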
def hex_list(items):
"""
Return a string of a python-like list, with hex numbers.
[0, 5420, 1942512] --> '[0x0, 0x152C, 0x1DA3F0]'
"""
return '[{}]'.format(', '.join('0x%X' % x for x in items))
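The `%X` format yields uppercase hex with no zero padding:

```python
def hex_list(items):
    # render a python-style list with uppercase hex numbers
    return '[{}]'.format(', '.join('0x%X' % x for x in items))

hex_list([0, 5420, 1942512])  # -> '[0x0, 0x152C, 0x1DA3F0]'
```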
def human_timestamp(timestamp):
"""
Return a human readable timestamp for a given epoch.
"""
dt = datetime.datetime.fromtimestamp(timestamp)
return dt.strftime("%b %d %Y %H:%M:%S")
def get_string_between(text, before, after):
"""
Get the string between two strings.
"""
pattern = "%s(.*)%s" % (before, after)
result = re.search(pattern, text)
if not result:
return None
return result.group(1)
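Because `before` and `after` are interpolated into a regex verbatim, callers must pre-escape any regex metacharacters (which is why the HTML-parsing code above writes `\{`). A quick demonstration against a bgcolor tag like the one scraped earlier:

```python
import re

def get_string_between(text, before, after):
    # note: `before` and `after` are regex fragments, not literal strings
    result = re.search("%s(.*)%s" % (before, after), text)
    return result.group(1) if result else None

html = '<body bgcolor="#3a3a3a">'
get_string_between(html, '<body bgcolor="', '">')  # -> '#3a3a3a'
get_string_between(html, '<head>', '</head>')      # -> None (no match)
```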
#------------------------------------------------------------------------------
# Python Callback / Signals
#------------------------------------------------------------------------------
def register_callback(callback_list, callback):
"""
Register a callable function to the given callback_list.
Adapted from http://stackoverflow.com/a/21941670
"""
# create a weakref callback to an object method
try:
callback_ref = weakref.ref(callback.__func__), weakref.ref(callback.__self__)
# create a weakref callback to a standalone function
except AttributeError:
callback_ref = weakref.ref(callback), None
# 'register' the callback
callback_list.append(callback_ref)
return callback_ref
def notify_callback(callback_list, *args):
"""
Notify the given list of registered callbacks of an event.
The given list (callback_list) is a list of weakref'd callables
registered through the register_callback() function. To notify the
callbacks of an event, this function will simply loop through the list
and call them.
This routine self-heals by removing dead callbacks for deleted objects as
it encounters them.
Adapted from http://stackoverflow.com/a/21941670
"""
cleanup = []
#
# loop through all the registered callbacks in the given callback_list,
# notifying active callbacks, and removing dead ones.
#
for callback_ref in callback_list:
callback, obj_ref = callback_ref[0](), callback_ref[1]
#
# if the callback is an instance method, deference the instance
# (an object) first to check that it is still alive
#
if obj_ref:
obj = obj_ref()
# if the object instance is gone, mark this callback for cleanup
if obj is None:
cleanup.append(callback_ref)
continue
# call the object instance callback
try:
callback(obj, *args)
# assume a Qt cleanup/deletion occurred
except RuntimeError as e:
cleanup.append(callback_ref)
continue
# if the callback is a static method...
else:
# if the static method is deleted, mark this callback for cleanup
if callback is None:
cleanup.append(callback_ref)
continue
# call the static callback
callback(*args)
# remove the deleted callbacks
for callback_ref in cleanup:
callback_list.remove(callback_ref)
def unregister_callback(callback_list, callback_ref):
"""
Remove a previously-registered callback reference.
"""
try:
callback_list.remove(callback_ref)
except (ValueError, AttributeError):
pass
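The register/notify/self-heal cycle can be shown end-to-end. This is a condensed, self-contained sketch of the same weakref scheme (error handling trimmed), demonstrating that a callback dies with its owning object instead of keeping it alive:

```python
import gc
import weakref

def register_callback(callback_list, callback):
    # bound methods are split into (func weakref, instance weakref)
    try:
        ref = (weakref.ref(callback.__func__), weakref.ref(callback.__self__))
    except AttributeError:
        ref = (weakref.ref(callback), None)
    callback_list.append(ref)

def notify_callback(callback_list, *args):
    cleanup = []
    for func_ref, obj_ref in callback_list:
        func = func_ref()
        if obj_ref:
            obj = obj_ref()
            if obj is None:               # instance was garbage collected
                cleanup.append((func_ref, obj_ref))
                continue
            func(obj, *args)              # re-bind and invoke
        else:
            if func is None:
                cleanup.append((func_ref, obj_ref))
                continue
            func(*args)
    for ref in cleanup:                   # self-heal: drop dead callbacks
        callback_list.remove(ref)

class Listener:
    def __init__(self):
        self.events = []
    def on_event(self, data):
        self.events.append(data)

callbacks, listener = [], Listener()
register_callback(callbacks, listener.on_event)
notify_callback(callbacks, "coverage_loaded")
received = list(listener.events)          # ["coverage_loaded"]

del listener                              # drop the only strong reference
gc.collect()
notify_callback(callbacks, "ignored")     # dead callback is pruned, not called
remaining = len(callbacks)                # 0
```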
@@ -0,0 +1,69 @@
import sys
import operator
#------------------------------------------------------------------------------
# Python 2/3 Compatibility Shims
#------------------------------------------------------------------------------
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
#
# xrange shim
#
if PY3:
xrange = range # is this bad lol
#
# Queue --> queue shim
#
try:
import Queue as queue
except ImportError:
import queue
#
# iter* shims by Benjamin Peterson, from https://github.com/benjaminp/six
#
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
viewkeys = operator.methodcaller("keys")
viewvalues = operator.methodcaller("values")
viewitems = operator.methodcaller("items")
else:
def iterkeys(d, **kw):
return d.iterkeys(**kw)
def itervalues(d, **kw):
return d.itervalues(**kw)
def iteritems(d, **kw):
return d.iteritems(**kw)
def iterlists(d, **kw):
return d.iterlists(**kw)
viewkeys = operator.methodcaller("viewkeys")
viewvalues = operator.methodcaller("viewvalues")
viewitems = operator.methodcaller("viewitems")
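Either branch leaves calling code version-agnostic: `iteritems(d)` always returns a lazy iterator over key/value pairs. A reduced sketch of just the `items` shims:

```python
import operator
import sys

PY3 = sys.version_info[0] == 3

if PY3:
    def iteritems(d, **kw):
        return iter(d.items(**kw))
    viewitems = operator.methodcaller("items")
else:
    def iteritems(d, **kw):
        return d.iteritems(**kw)
    viewitems = operator.methodcaller("viewitems")

d = {"a": 1, "b": 2}
sorted(iteritems(d))  # -> [('a', 1), ('b', 2)] on both Python 2 and 3
```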
@@ -0,0 +1,6 @@
from .shim import *
if QT_AVAILABLE:
from .util import *
from .waitbox import WaitBox
@@ -0,0 +1,118 @@
#
# this global is used to indicate whether Qt bindings for python are present
# and available for use by Lighthouse.
#
QT_AVAILABLE = False
#------------------------------------------------------------------------------
# PyQt5 <--> PySide2 Compatibility
#------------------------------------------------------------------------------
#
# we use this file to shim/re-alias a few Qt API's to ensure compatibility
# between the popular Qt frameworks. these shims serve to reduce the number
# of compatibility checks in the plugin code that consumes them.
#
# this file was critical for retaining compatibility with the Qt4 frameworks
# used by IDA 6.8/6.95, but it is less important now. support for Qt4 and
# older versions of IDA (< 7.0) was deprecated in Lighthouse v0.9.0
#
USING_PYQT5 = False
USING_PYSIDE2 = False
USING_PYSIDE6 = False
#
# TODO/QT: This file is getting pretty gross. this whole shim system
# should probably get refactored as I really don't want disassembler
# specific dependencies in here...
#
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
try:
import idaapi
ver_major, ver_minor = map(int, idaapi.get_kernel_version().split("."))
USING_NEW_IDA = (ver_major, ver_minor) >= (9, 2)
USING_OLD_IDA = not USING_NEW_IDA
except ImportError:
USING_NEW_IDA = False
USING_OLD_IDA = False
try:
import binaryninjaui
USING_NEW_BINJA = "qt_major_version" in binaryninjaui.__dict__ and binaryninjaui.qt_major_version == 6
USING_OLD_BINJA = not(USING_NEW_BINJA)
except ImportError:
USING_NEW_BINJA = False
USING_OLD_BINJA = False
#------------------------------------------------------------------------------
# PyQt5 Compatibility
#------------------------------------------------------------------------------
# attempt to load PyQt5 (IDA from 7.0 to 9.1)
if USING_OLD_IDA:
try:
import PyQt5.QtGui as QtGui
import PyQt5.QtCore as QtCore
import PyQt5.QtWidgets as QtWidgets
# importing went okay, PyQt5 must be available for use
QT_AVAILABLE = True
USING_PYQT5 = True
# import failed, PyQt5 is not available
except ImportError:
pass
#------------------------------------------------------------------------------
# PySide2 Compatibility
#------------------------------------------------------------------------------
# if PyQt5 did not import, try to load PySide2 (Old Binary Ninja / Cutter)
if not QT_AVAILABLE and USING_OLD_BINJA:
try:
import PySide2.QtGui as QtGui
import PySide2.QtCore as QtCore
import PySide2.QtWidgets as QtWidgets
# alias for less PySide2 <--> PyQt5 shimming
QtCore.pyqtSignal = QtCore.Signal
QtCore.pyqtSlot = QtCore.Slot
# importing went okay, PySide must be available for use
QT_AVAILABLE = True
USING_PYSIDE2 = True
# import failed. No Qt / UI bindings available...
except ImportError:
pass
#------------------------------------------------------------------------------
# PySide6 Compatibility
#------------------------------------------------------------------------------
# If all else fails, try to load PySide6 (New Binary Ninja and IDA)
if not QT_AVAILABLE and (USING_NEW_BINJA or USING_NEW_IDA):
try:
import PySide6.QtGui as QtGui
import PySide6.QtCore as QtCore
import PySide6.QtWidgets as QtWidgets
# alias for less PySide6 <--> PyQt5 shimming
QtCore.pyqtSignal = QtCore.Signal
QtCore.pyqtSlot = QtCore.Slot
QtWidgets.QAction = QtGui.QAction
# importing went okay, PySide must be available for use
QT_AVAILABLE = True
USING_PYSIDE6 = True
# import failed. No Qt / UI bindings available...
except ImportError:
pass
@@ -0,0 +1,368 @@
import sys
import time
import logging
import threading
from .shim import *
from ..misc import is_mainthread
from ..python import *
from ..disassembler import disassembler
logger = logging.getLogger("Lighthouse.Qt.Util")
#------------------------------------------------------------------------------
# Qt Fonts
#------------------------------------------------------------------------------
def MonospaceFont():
"""
Convenience alias for creating a monospace Qt font object.
"""
font = QtGui.QFont("Courier New")
font.setStyleHint(QtGui.QFont.Monospace)
return font
#------------------------------------------------------------------------------
# Qt Util
#------------------------------------------------------------------------------
def color_text(text, color):
"""
Return a colorized (HTML) version of the given string.
"""
return "<font color=\"%s\">%s</font>" % (color.name(), text)
def copy_to_clipboard(data):
"""
Copy the given data (a string) to the system clipboard.
"""
cb = QtWidgets.QApplication.clipboard()
cb.clear(mode=QtGui.QClipboard.Mode.Clipboard)
cb.setText(data, mode=QtGui.QClipboard.Mode.Clipboard)
def flush_qt_events():
"""
Flush the Qt event pipeline.
"""
app = QtCore.QCoreApplication.instance()
app.processEvents()
def get_qt_icon(name):
"""
Get a standard Qt icon by name.
"""
icon_type = getattr(QtWidgets.QStyle, name)
return QtWidgets.QApplication.style().standardIcon(icon_type)
def get_default_font_size():
"""
Get the default font size for this QApplication.
"""
return QtGui.QFont().pointSizeF()
def get_dpi_scale():
"""
Get a DPI-afflicted value useful for consistent UI scaling.
"""
font = MonospaceFont()
font.setPointSize(normalize_to_dpi(120))
fm = QtGui.QFontMetricsF(font)
# the rendered font height is expected to be ~173.0 at normal DPI
return fm.height() / 173.0
def compute_color_on_gradient(percent, color1, color2):
"""
Compute the color specified by a percent between two colors.
"""
r1, g1, b1, _ = color1.getRgb()
r2, g2, b2, _ = color2.getRgb()
# compute the new color across the gradient of color1 -> color 2
r = r1 + int(percent * (r2 - r1))
g = g1 + int(percent * (g2 - g1))
b = b1 + int(percent * (b2 - b1))
# return the new color
return QtGui.QColor(r,g,b)
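The per-channel linear interpolation above can be exercised without Qt; this sketch mirrors the same arithmetic on plain `(r, g, b)` tuples (`lerp_rgb` is a hypothetical helper for illustration, not part of Lighthouse):

```python
def lerp_rgb(percent, color1, color2):
    """Interpolate each RGB channel, mirroring compute_color_on_gradient()."""
    return tuple(
        c1 + int(percent * (c2 - c1))
        for c1, c2 in zip(color1, color2)
    )

# 0% and 100% return the endpoint colors...
assert lerp_rgb(0.0, (255, 0, 0), (0, 0, 255)) == (255, 0, 0)
assert lerp_rgb(1.0, (255, 0, 0), (0, 0, 255)) == (0, 0, 255)

# ...and 50% between black and white is mid-gray (int() truncates 127.5)
assert lerp_rgb(0.5, (0, 0, 0), (255, 255, 255)) == (127, 127, 127)
```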
def move_mouse_event(mouse_event, position):
"""
Move the given mouse event to a different position.
"""
new_event = QtGui.QMouseEvent(
mouse_event.type(),
position,
mouse_event.button(),
mouse_event.buttons(),
mouse_event.modifiers()
)
return new_event
def normalize_to_dpi(font_size):
"""
Normalize the given font size based on the system DPI.
"""
if sys.platform == "darwin": # macos is lame
return font_size + 2
return font_size
def prompt_string(label, title, default=""):
"""
Prompt the user with a dialog to enter a string.
This does not block the IDA main thread (unlike idaapi.askstr)
"""
dpi_scale = get_dpi_scale()
dlg = QtWidgets.QInputDialog(None)
dlg.setWindowFlags(dlg.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
dlg.setInputMode(QtWidgets.QInputDialog.TextInput)
dlg.setLabelText(label)
dlg.setWindowTitle(title)
dlg.setTextValue(default)
dlg.resize(
int(dpi_scale*400),
int(dpi_scale*50)
)
dlg.setModal(True)
dlg.show()
dlg.setFocus(QtCore.Qt.PopupFocusReason)
ok = dlg.exec_()
text = str(dlg.textValue())
return (ok, text)
def predict_bg_color(image):
"""
Predict the 'background color' of a given image.
This function takes an image, and analyzes its first row of pixels. It
will return the color that it believes to be the 'background color' based
on the longest sequence of identical pixels.
"""
assert image.width() and image.height()
# the details for the longest known color streak will be saved in these
longest = 1
speculative_bg = image.pixel(0, 0)
# this will be the computed length of the current color streak
sequence = 1
# find the longest streak of color in a single pixel slice
for x in xrange(1, image.width()):
# the color of this pixel matches the last pixel, extend the streak count
if image.pixel(x, 0) == image.pixel(x-1,0):
sequence += 1
#
# this catches the case where the longest color streak is in fact
# the last one. this ensures the streak color will get saved.
#
if x != image.width() - 1:
continue
# color change, determine if this was the longest continuous color streak
if sequence > longest:
# save the last pixel as the longest sequence / most likely BG color
longest = sequence
speculative_bg = image.pixel(x-1, 0)
# reset the sequence counter
sequence = 1
# return the color we speculate to be the background color
return speculative_bg
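Stripped of the Qt image API, the scan above is a longest-run search over one row of pixel values. This hypothetical `longest_streak` helper implements the same idea on a plain list, structured so the trailing streak is handled without a special case:

```python
def longest_streak(row):
    """Return the value forming the longest run of identical entries in row."""
    best_value, best_length = row[0], 1
    value, length = row[0], 1
    for pixel in row[1:]:
        if pixel == value:
            length += 1
        else:
            value, length = pixel, 1
        # update the winner on every step, so a trailing streak still counts
        if length > best_length:
            best_value, best_length = value, length
    return best_value

assert longest_streak([7, 7, 1, 1, 1, 2]) == 1   # middle streak wins
assert longest_streak([7, 1, 2, 2, 2, 2]) == 2   # trailing streak wins
```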
def remap_key_event(event, new_key):
"""
Change a given KeyPress QEvent to a different key.
"""
return QtGui.QKeyEvent(
QtCore.QEvent.KeyPress,
new_key,
event.modifiers(),
event.text(),
event.isAutoRepeat(),
event.count()
)
def singleshot(ms, function=None):
"""
A Qt Singleshot timer that can be stopped.
"""
timer = QtCore.QTimer()
timer.setInterval(ms)
timer.setSingleShot(True)
timer.timeout.connect(function)
return timer
#------------------------------------------------------------------------------
# Async Util
#------------------------------------------------------------------------------
def await_future(future):
"""
Wait for a queue (future) message without blocking the main (Qt) thread.
This is effectively a technique I use to get around completely blocking
IDA's mainthread while waiting for a threaded result that may need to make
use of the execute_sync operators.
Waiting for a 'future' thread result to come through via this function
lets other execute_sync actions slip through (at least Read, Fast).
"""
interval = 0.02 # the interval (in seconds) at which we poll for a response
# run until the message arrives through the future (a queue)
while True:
# block for a brief period to see if the future completes
try:
return future.get(timeout=interval)
#
# the future timed out, so perhaps it is blocked on a request
# to the mainthread. flush the requests now and try again
#
except queue.Empty:
pass
logger.debug("Awaiting future...")
#
# if we are executing (well, blocking) as the main thread, we need
# to flush the event loop so IDA does not hang
#
if QT_AVAILABLE and is_mainthread():
flush_qt_events()
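Stripped of the Qt specifics, the loop above is just a timed `queue.get` retried until a result arrives. A minimal, Qt-free sketch (with the event flush replaced by an injectable no-op callback) looks like:

```python
import queue
import threading
import time

def await_future(future, interval=0.02, flush=lambda: None):
    """Poll the future, invoking flush (e.g. flush_qt_events) on each timeout."""
    while True:
        try:
            return future.get(timeout=interval)
        except queue.Empty:
            flush()  # keep the UI event loop moving while we wait

# a worker thread delivers its result through the queue shortly after starting
future = queue.Queue()
worker = threading.Thread(target=lambda: (time.sleep(0.05), future.put(42)))
worker.start()
assert await_future(future) == 42
worker.join()
```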
def await_lock(lock):
"""
Wait for a lock without blocking the main (Qt) thread.
See await_future() for more details.
"""
elapsed = 0 # total time elapsed waiting for the lock
interval = 0.02 # the interval (in seconds) between acquire attempts
timeout = 60.0 # the total time allotted to acquiring the lock
end_time = time.time() + timeout
# wait until the lock is available
while time.time() < end_time:
#
# attempt to acquire the given lock without blocking (via 'False').
# if we successfully acquire the lock, then we can return (success)
#
if lock.acquire(False):
logger.debug("Acquired lock!")
return
#
# the lock is not available yet. we need to sleep so we don't choke
# the cpu, and try to acquire the lock again next time through...
#
logger.debug("Awaiting lock...")
time.sleep(interval)
#
# if we are executing (well, blocking) as the main thread, we need
# to flush the event loop so IDA does not hang
#
if QT_AVAILABLE and is_mainthread():
flush_qt_events()
#
# we spent 60 seconds trying to acquire the lock, but never got it...
# to avoid hanging IDA indefinitely (or worse), we abort via signal
#
raise RuntimeError("Failed to acquire lock after %f seconds!" % timeout)
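The same non-blocking pattern applies to locks: `acquire(False)` is retried on an interval until a deadline passes. A Qt-free sketch of the loop, with the event flush again injected as a callback:

```python
import threading
import time

def await_lock(lock, interval=0.02, timeout=5.0, flush=lambda: None):
    """Poll lock.acquire(False) until acquired or the timeout elapses."""
    end_time = time.time() + timeout
    while time.time() < end_time:
        if lock.acquire(False):
            return
        time.sleep(interval)
        flush()  # stand-in for flush_qt_events()
    raise RuntimeError("Failed to acquire lock after %.1f seconds!" % timeout)

# a worker holds the lock briefly; await_lock() returns once it is released
lock = threading.Lock()
lock.acquire()
threading.Thread(target=lambda: (time.sleep(0.05), lock.release())).start()
await_lock(lock)
assert lock.locked()  # ...and the caller now holds it
```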
class QMainthread(QtCore.QObject):
"""
A Qt object whose sole purpose is to execute code on the mainthread.
"""
toMainthread = QtCore.pyqtSignal(object)
toMainthreadFast = QtCore.pyqtSignal(object)
def __init__(self):
super(QMainthread, self).__init__()
# helpers used to ensure thread safety
self._lock = threading.Lock()
self._fast_refs = []
self._result_queue = queue.Queue()
# signals used to communicate with the Qt mainthread
self.toMainthread.connect(self._execute_with_result)
self.toMainthreadFast.connect(self._execute_fast)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def execute(self, function):
"""
Execute a function on the mainthread and wait for its return value.
This function is safe to call from any thread, at any time.
"""
# if we are already on the mainthread, execute the callable inline
if is_mainthread():
return function()
# execute the callable on the mainthread and wait for it to complete
with self._lock:
self.toMainthread.emit(function)
result = self._result_queue.get()
# return the result of executing on the mainthread
return result
def execute_fast(self, function):
"""
Execute a function on the mainthread without waiting for completion.
"""
#
# append the given function to a reference list.
#
# I do this because I am not confident python / qt will guarantee the
# lifetime of the callable (function) as we cross threads and the
# callee scope/callstack dissolves away from beneath us
#
# this callable will be deleted from the ref list in _execute_fast()
#
self._fast_refs.append(function)
# signal to the mainthread that a new function is ready to execute
self.toMainthreadFast.emit(function)
#--------------------------------------------------------------------------
# Internal
#--------------------------------------------------------------------------
def _execute_with_result(self, function):
try:
self._result_queue.put(function())
except Exception as e:
logger.exception("QMainthread Exception")
self._result_queue.put(None)
def _execute_fast(self, function):
function()
self._fast_refs.remove(function)
qt_mainthread = QMainthread()
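At its core, QMainthread marshals a callable from a worker thread to the main thread and ships the result back through a queue; the Qt signals simply deliver the callable into the main thread's event loop. This Qt-free sketch (a hypothetical `MainthreadSketch`, with an explicit `pump_one()` standing in for the event loop, and without the lock that serializes concurrent callers) shows the pattern:

```python
import queue
import threading

class MainthreadSketch:
    """Queue-based stand-in for QMainthread's signal/slot marshalling."""

    def __init__(self):
        self._calls = queue.Queue()
        self._results = queue.Queue()

    def execute(self, function):
        """Called from a worker thread; blocks until the result comes back."""
        self._calls.put(function)
        return self._results.get()

    def pump_one(self):
        """Called on the 'main' thread; runs one queued callable."""
        function = self._calls.get()
        try:
            self._results.put(function())
        except Exception:
            self._results.put(None)

# a worker submits work; the 'main' thread services it and returns the result
mainthread = MainthreadSketch()
results = []
worker = threading.Thread(
    target=lambda: results.append(mainthread.execute(lambda: 2 + 2))
)
worker.start()
mainthread.pump_one()
worker.join()
assert results == [4]
```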
@@ -0,0 +1,102 @@
from .shim import *
from .util import get_dpi_scale
import logging
logger = logging.getLogger("Lighthouse.Qt.WaitBox")
#--------------------------------------------------------------------------
# Qt WaitBox
#--------------------------------------------------------------------------
class WaitBox(QtWidgets.QDialog):
"""
A Generic Qt WaitBox Dialog.
"""
def __init__(self, text, title="Please wait...", abort=None):
super(WaitBox, self).__init__()
# dialog text & window title
self._text = text
self._title = title
# abort routine (optional)
self._abort = abort
# initialize the dialog UI
self._ui_init()
def set_text(self, text):
"""
Change the waitbox text.
"""
self._text = text
self._text_label.setText(text)
qta = QtCore.QCoreApplication.instance()
qta.processEvents()
def show(self, modal=True):
self.setModal(modal)
super(WaitBox, self).show()
qta = QtCore.QCoreApplication.instance()
qta.processEvents()
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
self.setWindowFlags(
self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint
)
self.setWindowFlags(
self.windowFlags() | QtCore.Qt.MSWindowsFixedSizeDialogHint
)
self.setWindowFlags(
self.windowFlags() & ~QtCore.Qt.WindowCloseButtonHint
)
# configure the main widget / form
self.setSizeGripEnabled(False)
self.setModal(True)
self._dpi_scale = get_dpi_scale()*5
# initialize abort button
self._abort_button = QtWidgets.QPushButton("Cancel")
# layout the populated UI just before showing it
self._ui_layout()
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
self.setWindowTitle(self._title)
self._text_label = QtWidgets.QLabel(self._text)
self._text_label.setAlignment(QtCore.Qt.AlignHCenter)
# vertical layout (whole widget)
v_layout = QtWidgets.QVBoxLayout()
v_layout.setAlignment(QtCore.Qt.AlignCenter)
v_layout.addWidget(self._text_label)
if self._abort:
self._abort_button.clicked.connect(self._abort)
v_layout.addWidget(self._abort_button)
v_layout.setSpacing(int(self._dpi_scale*3))
v_layout.setContentsMargins(
int(self._dpi_scale*5),
int(self._dpi_scale),
int(self._dpi_scale*5),
int(self._dpi_scale)
)
# scale widget dimensions based on DPI
height = int(self._dpi_scale * 15)
self.setMinimumHeight(height)
# compute the dialog layout
self.setLayout(v_layout)
@@ -0,0 +1,63 @@
import re
import json
import logging
import threading
try:
from urllib2 import urlopen # Py2
except ImportError:
from urllib.request import urlopen # Py3
logger = logging.getLogger("Lighthouse.Util.Update")
#------------------------------------------------------------------------------
# Update Checking
#------------------------------------------------------------------------------
UPDATE_URL = "https://api.github.com/repos/gaasedelen/lighthouse/releases/latest"
def check_for_update(current_version, callback):
"""
Perform a plugin update check.
"""
update_thread = threading.Thread(
target=async_update_check,
args=(current_version, callback,),
name="Lighthouse UpdateChecker"
)
update_thread.start()
def async_update_check(current_version, callback):
"""
An async worker thread to check for a plugin update.
"""
logger.debug("Checking for update...")
current_version = "v" + current_version
try:
response = urlopen(UPDATE_URL, timeout=5.0)
html = response.read()
info = json.loads(html)
remote_version = info["tag_name"]
except Exception:
logger.debug(" - Failed to reach GitHub for update check...")
return
# convert version #'s to integer for easy compare...
version_remote = int(''.join(re.findall(r'\d+', remote_version)))
version_local = int(''.join(re.findall(r'\d+', current_version)))
# no updates available...
logger.debug(" - Local: '%s' vs Remote: '%s'" % (current_version, remote_version))
if version_local >= version_remote:
logger.debug(" - No update needed...")
return
# notify the user if an update is available
update_message = "An update is available for Lighthouse!\n\n" \
" - Latest Version: %s\n" % (remote_version) + \
" - Current Version: %s\n\n" % (current_version) + \
"Please go download the update from GitHub."
callback(update_message)
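Note that concatenating every digit into one integer can misorder tags with multi-digit components (e.g. "v1.2" vs "v0.10.1"); comparing tuples of components is a more robust alternative. A sketch of the difference, with `version_tuple` as a hypothetical helper:

```python
import re

def version_tuple(tag):
    """Split a release tag into integer components for tuple comparison."""
    return tuple(int(x) for x in re.findall(r"\d+", tag))

# tuple comparison orders these tags correctly...
assert version_tuple("v1.2") > version_tuple("v0.10.1")

# ...while digit concatenation gets them backwards (12 < 101)
concat = lambda tag: int("".join(re.findall(r"\d+", tag)))
assert concat("v1.2") < concat("v0.10.1")
```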
@@ -0,0 +1,27 @@
from lighthouse.util.log import logging_started, start_logging
from lighthouse.util.disassembler import disassembler
if not logging_started():
logger = start_logging()
#------------------------------------------------------------------------------
# Disassembler Agnostic Plugin Loader
#------------------------------------------------------------------------------
logger.debug("Resolving disassembler platform for plugin...")
if disassembler.headless:
logger.info("Disassembler '%s' is running headlessly" % disassembler.NAME)
logger.info(" - Lighthouse is not supported in headless modes (yet!)")
elif disassembler.NAME == "IDA":
logger.info("Selecting IDA loader...")
from lighthouse.integration.ida_loader import *
elif disassembler.NAME == "BINJA":
logger.info("Selecting Binary Ninja loader...")
from lighthouse.integration.binja_loader import *
else:
raise NotImplementedError("DISASSEMBLER-SPECIFIC SHIM MISSING")
Binary file not shown.