Compare commits

...

339 Commits

Author SHA1 Message Date
gaasedelen 0350b0c538 Follow-up to #157: unsubscribe torn-down coverage overview refresh callback 2026-02-13 19:39:44 -05:00
gaasedelen 88051b0652 Follow-up to #157: keep IDA8/9.1 sip path and bump dev version to 0.9.4 2026-02-13 18:56:11 -05:00
gaasedelen 1d7e4b321c Merge PR #157: IDA 9.2/9.3 compatibility fixes 2026-02-13 18:55:54 -05:00
raptor 602bb611a4 Fix some issues in IDA 9.1 2025-09-09 14:04:28 +02:00
raptor c5cc03d7dc Improve IDA 9.2 support without breaking previous supported versions 2025-09-09 13:34:47 +02:00
raptor 720b036a9d Start working on IDA 9.2 port 2025-09-09 11:43:22 +02:00
raptor b8ff268c86 IDA 9.x SDK fix 2025-08-28 17:47:43 +02:00
gaasedelen 88ceac7b19 tweak to make lighthouse copy *all* in-box theme files to the user theme directory... 2024-02-06 13:59:33 -05:00
gaasedelen 9a56463103 make update dialog look a little less janky 2024-02-06 13:30:54 -05:00
gaasedelen 562595be9b Merge branch 'develop' 2024-02-05 13:05:19 -05:00
gaasedelen eeb16a5f81 final fixes, ticking version numbers 2024-02-05 13:03:41 -05:00
gaasedelen 84eeb210c6 minor cleanup, finalizing for release 2024-02-02 19:28:55 -05:00
gaasedelen f944b62cc3 Update Intel pin build scripts for pin-3.30-98830 (MSVC) 2024-02-02 19:06:33 -05:00
gaasedelen 1b883377d6 switch database metadata instruction listing to a set, dramatically improving coverage loading perf in several cases, fixing #128 2024-02-02 17:31:56 -05:00
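The commit above swaps a list for a set in the database metadata's instruction listing. The likely win: coverage loading performs many "is this address a known instruction?" membership tests, which are O(n) on a list but O(1) average on a set. A minimal sketch of the idea, with illustrative names that are not Lighthouse's actual structures:

```python
# Illustrative sketch only: 'instruction_addresses' and 'map_coverage' are
# hypothetical stand-ins for Lighthouse's real metadata structures.
instruction_addresses = set()  # was a list; set membership is O(1) average

def map_coverage(trace_addrs, instruction_addresses):
    """Keep only trace addresses that land on known instruction starts."""
    return [addr for addr in trace_addrs if addr in instruction_addresses]
```

With millions of trace addresses, replacing a linear scan per lookup with a hash probe is exactly the kind of change that "dramatically" improves load times.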
gaasedelen 146eb4fd2d fix bug where 'X' column in combobox was super wide/fat under PySide6 for some reason 2024-02-02 17:03:08 -05:00
gaasedelen de2704b680 Coverage Xref right click UI action rewrite for binja 2024-02-02 16:38:04 -05:00
gaasedelen 96df2c5a1f adds "long night" theme by ioncodes 2024-02-02 12:19:31 -05:00
gaasedelen 9c579c6e9d fix #131, hopefully improves dpi issues cross platform 2024-02-02 11:55:05 -05:00
gaasedelen ee7ac30f14 fix binary ninja "suspicious" warnings 2024-02-02 11:32:04 -05:00
gaasedelen 5f8fa02de2 fix function/symbol rename event hooks in binja 2024-02-02 11:32:04 -05:00
Thomas Dupuy 87defff5cd Fix small typo. 2024-02-01 20:31:24 -05:00
gaasedelen e1438159da fixes coverage xref compatibility issues (timestamp crash) with binja / PySide6 per #123 2024-02-01 20:25:08 -05:00
gaasedelen 856768d129 fix clipboard compat issue exposed by PySide6 2024-02-01 20:08:26 -05:00
gaasedelen 7d67e41773 Fixes issue raised in #129 while retaining compatibility and functionality across IDA / binja. 2024-02-01 19:54:30 -05:00
gaasedelen 8f3c23bbb6 maintain compatibility with IDA 2024-02-01 18:50:58 -05:00
Alexandre Brenner 4992d4b731 Fix ValueError
Only tested on Windows
2024-02-01 18:50:58 -05:00
Alexandre Brenner 550c476e81 Fix TypeError
`TypeError: BasicBlock._create_instance() takes 2 positional arguments but 3 were given`
Only tested on Windows
2024-02-01 18:50:58 -05:00
mishap mishap bba8d91253 Allow partial module whitelist match 2024-01-18 20:49:25 -05:00
gaasedelen 0e0e08e564 normalize code/comments of drcov 3 fix 2024-01-18 19:47:48 -05:00
PavelBlinnikov 6912018b89 add support for drcov version 3 2024-01-18 19:47:48 -05:00
Moritz Schloegel f4642e8b4b fix typo in gradient 2022-02-17 14:29:27 -05:00
Moritz Schloegel 6e1dcb8d40 convert float to int as expected by Qt API functions (fixes #116) 2022-02-17 14:29:27 -05:00
Alexander Taylor 973b40e6d0 Fix for func possibly being None. 2022-01-10 11:36:32 -05:00
gaasedelen e3d636a3f4 adds support for tracking, viewing, and dumping orphan coverage (#108) 2021-10-05 22:48:37 -04:00
gaasedelen 7245a2d2c4 Merge branch 'develop' 2021-08-10 16:41:04 -04:00
gaasedelen 701952d83f add x86 support to tenet reader 2021-08-10 16:39:05 -04:00
gaasedelen 9ed0ccf528 updated readme, version number 2021-08-10 16:35:18 -04:00
gaasedelen ae1072a04b improves background color detection logic 2021-08-09 16:43:18 -04:00
gaasedelen 4c19fee630 bugfix when loading executables with unicode names on Python 2.7 2021-08-09 16:42:47 -04:00
gaasedelen d02fbf13d9 adds support for loading text-based Tenet traces 2021-08-06 18:45:51 -04:00
_yrp 19eddd6470 Lower block trace confidence (#105)
Depending on how blocks are traced, on some targets 0.9 is too high. Specifically, some emulator-based tracers might consider the first instruction after a return to be a new "block" -- this will inflate the number of instructions with coverage past the threshold, even though the trace should still be considered block coverage.
2021-08-06 18:41:41 -04:00
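The commit above tunes a heuristic threshold that guesses whether a loaded trace is instruction-level or basic-block-level coverage. A hedged sketch of the idea, under the assumption that the check compares how many executed addresses line up with known block starts (function name and exact ratio here are illustrative, not Lighthouse's real API):

```python
# Hypothetical sketch: a pure block trace hits (almost) only block start
# addresses, while an instruction trace hits many addresses in between.
# Noisy tracers (e.g. an emulator logging the instruction after a return
# as a new "block") push the ratio down, hence a tolerant threshold.
def looks_like_block_trace(executed_addrs, block_starts, confidence=0.8):
    """Return True if the trace appears to contain only block head addresses."""
    if not executed_addrs:
        return False
    hits = sum(1 for addr in executed_addrs if addr in block_starts)
    return (hits / len(executed_addrs)) >= confidence
```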
gaasedelen 09a95061ed reduce banner print 2021-08-03 16:47:34 -04:00
gaasedelen 94759790de font tweaks again 2021-08-03 16:46:56 -04:00
gaasedelen 374ca999f8 maybe fixes qt conflict issues? (issue #100) 2021-08-03 12:31:10 -04:00
gaasedelen b3c8683110 font fixes for macos...? for the 10th time...? 2021-07-30 09:03:40 -04:00
gaasedelen aa71810bf7 PySide6 & Python2 Compat :-X 2021-07-30 09:03:11 -04:00
gaasedelen b0f798a3d9 a bit of cleanup for binja plugin manager stuff 2021-07-29 10:42:45 -04:00
gaasedelen 85b08a9611 spindown lighthouse contexts in binja 2021-07-29 10:33:00 -04:00
gaasedelen f2031f921f a bit of housekeeping on drcov.py 2021-07-29 06:08:14 -04:00
Jordan 8e98a74eee Qt6 Updates for Binja + Binja plugin manager compatibility (#106)
* initial pyside6 support and testing binary ninja plugin installation

* small tweak to fix plugin working with internal test builds
2021-07-29 05:47:36 -04:00
gaasedelen 85818f5b9b Merge pull request #98 from angelystor/master
Added DrCov module table version 5 support
2021-04-09 19:41:08 -04:00
Angelystor 8434c80e18 Added DrCov module table version 5 support which the latest DynamoRIO outputs 2021-04-08 18:55:09 +08:00
DrWhax a25e77764a Fixes for fixing the PIN build for all operating systems (#94)
* remove unneeded std namespace

* remove Wno-aligned-new

* This makes the build work again, tested on VC2015 community edition on
Windows Server 2019.

Credit for these changes go to: https://github.com/HongThatCong/Lighthouse-CodeCoverage-build
2020-12-26 21:57:41 -05:00
gaasedelen 91e427e7ec Merge branch 'develop' 2020-09-02 14:57:02 -04:00
gaasedelen 1bad74fc50 updates version string, readme tweaks 2020-09-02 14:55:44 -04:00
gaasedelen 68325095ee added the ability to suppress the module selector dialog 2020-08-31 21:58:02 -04:00
gaasedelen a50298a9d9 fix crash when ida is not shipped with ida_hexrays (issue #88) 2020-08-31 21:55:50 -04:00
gaasedelen 18ea9bbc69 disable logging by default, as things seem stable... 2020-07-28 22:30:08 -04:00
gaasedelen 5818ffe3fc trim filepaths off a coverage file's 'module names' (if present) 2020-07-28 22:29:40 -04:00
gaasedelen 5258d583c7 fix regression introduced in the shared note / painting fix from a commit or two prior 2020-07-28 22:09:52 -04:00
gaasedelen 5480afe646 renamed plugin dir to plugins for easier dev 2020-07-28 19:45:58 -04:00
gaasedelen 7a1abaeaa1 fixes a bug where the wrong nodes could get painted in IDA if a 'shared' node was executed 2020-07-04 22:12:55 -04:00
gaasedelen 1cf63ac925 Update metadata.py
save confirmed edges after computing cyclomatic complexity
2020-05-12 01:25:27 -04:00
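The metadata.py commit above touches the cyclomatic complexity computation. For a single function's control-flow graph, McCabe cyclomatic complexity is E - N + 2 (edges minus nodes plus two); a minimal helper illustrating the formula (not Lighthouse's actual code):

```python
def cyclomatic_complexity(num_edges, num_nodes):
    """McCabe cyclomatic complexity of a single-function CFG: E - N + 2."""
    return num_edges - num_nodes + 2
```

For example, a straight-line function (2 nodes, 1 edge) scores 1, while an if/else diamond (4 nodes, 4 edges) scores 2. Since computing it requires walking every edge of the CFG anyway, caching the confirmed edges afterward (as the commit message describes) avoids a second traversal.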
gaasedelen 2bb4b9ebf3 vastly improves Lighthouse pintool performance for larger workloads 2020-05-04 05:35:02 -04:00
gaasedelen 710b13f38e Merge branch 'develop' 2020-04-23 06:00:37 -04:00
gaasedelen 69a595a875 updates readme and version # 2020-04-23 05:55:40 -04:00
gaasedelen 4661517cb3 hack to hide binja featuremap by default, since it will probably collide with the coverage overview dock position :-X 2020-04-23 05:03:09 -04:00
gaasedelen 3f0cfa856e bugfix where paint wouldn't fully refresh after changing themes 2020-04-23 04:20:38 -04:00
gaasedelen 8b9382d414 document the CoverageFile format a bit...... 2020-04-22 01:17:29 -04:00
gaasedelen a0c77f0f72 lower the font size a bit for macos 2020-04-21 23:13:44 -04:00
gaasedelen 22d48fa52b more robust path creation 2020-04-20 19:42:24 -04:00
gaasedelen 8cbfffe2e0 actually detect when IDA is in batch mode..... 2020-04-20 19:42:06 -04:00
gaasedelen 1df982ca4c ensure lighthouse stays disabled while using disassemblers headlessly 2020-04-20 18:05:16 -04:00
gaasedelen 859b994bf7 improve styling cross-platform/DPI 2020-04-20 15:42:08 -04:00
gaasedelen b610b1ee32 fixes bug where combobox would immediately close on binja linux 2020-04-20 04:56:26 -04:00
gaasedelen 33ef4e47a2 fix bug where drcov parser could fail to get correct filename from a crossplatform path... 2020-04-20 04:55:31 -04:00
gaasedelen 4eaca66caa fix bug where refresh could run twice when opening cov overview 2020-04-20 03:17:42 -04:00
gaasedelen 4e8f5d3bbc improve table header styling, dialogs 2020-04-20 03:17:03 -04:00
gaasedelen a943580b2c improved warning dialogs a bit... 2020-04-19 20:52:43 -04:00
gaasedelen 112e5d6a3b fix partial painting in binja... 2020-04-19 18:00:32 -04:00
gaasedelen 75572aed33 fix so the 'prefix' delim does not show in the table / matches what IDA renders... 2020-04-19 18:00:15 -04:00
gaasedelen 7c1573bfd6 cleanup of TODO's 2020-04-19 02:33:56 -04:00
gaasedelen feb83fc5d5 re-enable headless access via lighthouse.get_context(...) 2020-04-18 23:13:23 -04:00
gaasedelen 3f33c3cb45 enable live rebasing (at least... for IDA) 2020-04-18 22:15:57 -04:00
gaasedelen 9c0ecbc81e a few fixes and tweaks for robustness, performance 2020-04-18 20:21:50 -04:00
gaasedelen a0367a85da refresh hexrays views automatically 2020-04-18 03:29:09 -04:00
gaasedelen 4708422c6a improve the stability of the IDA painter 2020-04-18 02:19:42 -04:00
gaasedelen a789220b56 improve metadata collection 2020-04-17 23:46:34 -04:00
gaasedelen bc77c0ece2 wrap force clearing with a waitbox... 2020-04-17 23:01:27 -04:00
gaasedelen 4d94680b94 better resource cleanup when unloading (helps IDA close faster...) 2020-04-17 14:01:04 -04:00
gaasedelen b2c6695042 improve double click jump precision to go to first block with coverage in function 2020-04-17 02:03:03 -04:00
gaasedelen c2ceb47b84 totally overhaul painting to support notion of 'streaming' 2020-04-17 01:38:40 -04:00
gaasedelen 0d52ef5068 misc fixes / tweaks 2020-04-17 01:37:46 -04:00
gaasedelen 61b8fb7668 better handling of subsystem lifetimes 2020-04-17 01:37:21 -04:00
gaasedelen 68f8c884f9 make theme util function more accessible 2020-04-16 23:46:17 -04:00
gaasedelen a0b375064a improve painting perf 2020-04-15 21:49:03 -04:00
gaasedelen c5dbfb1062 more verbose painter output 2020-04-13 21:11:53 -04:00
gaasedelen ea8cd61122 improve robustness of update check 2020-04-13 20:52:35 -04:00
gaasedelen 4aa116580e minor bugfixes / QoL 2020-04-13 10:04:53 -04:00
gaasedelen 6e6084a058 adds minimal update check 2020-04-13 10:04:25 -04:00
gaasedelen 481c624b7e improve theme reactivity for binja 2020-04-13 08:59:02 -04:00
gaasedelen a3183f21f2 improve theme loading / hinting for IDA 2020-04-12 19:53:58 -04:00
gaasedelen 33b3d4e0b6 Merge branch 'fbl_binja_refactor' into develop 2020-04-12 10:40:18 -04:00
gaasedelen 7fd1e46e53 fixes instructions in partially painted nodes not getting unpainted in binja... 2020-04-12 10:35:53 -04:00
gaasedelen 4ab65c985b cleanup, fixes right click regression with IDA 2020-04-12 09:03:43 -04:00
gaasedelen 28b0ecd49c automatically build the metadata cache when the coverage overview is first shown (if the cache is not already built) 2020-04-12 08:48:06 -04:00
gaasedelen c4cf78c1dd reconcile the binja refactor with IDA 2020-04-12 04:19:04 -04:00
gaasedelen e5b29f97b7 misc cleanup of binja refactor branch & TODO's 2020-04-11 00:00:17 -04:00
gaasedelen 36a37935dc reduce debug prints in binja, manage widget view state per tab 2020-04-10 23:25:55 -04:00
gaasedelen 65fd677758 prioritize navigating to a function start, if a function start block is shared 2020-04-10 21:57:52 -04:00
gaasedelen aa4936e269 enable priority painting for binja 2020-04-10 17:42:40 -04:00
gaasedelen 00f82a2181 only paint instructions in partially executed binja nodes 2020-04-10 16:54:20 -04:00
gaasedelen 1f7e525aeb made 'jump' case insensitive for 'sub_...' funcs in binja 2020-04-10 08:12:49 -04:00
gaasedelen 1a13b23345 fixes bug where one could not 'jump' to a renamed function 2020-04-10 08:11:52 -04:00
gaasedelen 52fb3e70d0 emancipate nodes from functions, this allows coverage to properly be computed for functions that share nodes 2020-04-10 07:46:58 -04:00
gaasedelen ff2c0d9619 switch to pprofile for line profiling 2020-04-10 05:27:08 -04:00
gaasedelen ebea88465f binja-specific perf wins 2020-04-10 03:05:37 -04:00
gaasedelen 67e5caf62d mutual perf wins for metadata caching 2020-04-10 03:03:53 -04:00
gaasedelen a6eeafe180 overhaul for proper binja integration 2020-04-08 11:42:32 -04:00
gaasedelen 8c4e29fe50 bugfix: automatically evaluate the shell expression when switching to the hot shell 2020-04-08 01:43:13 -04:00
gaasedelen 9eca228925 fixes bug that could cause one to be prompted multiple times for a composition name 2020-04-08 01:06:22 -04:00
gaasedelen 553eeb7d23 fixes grammar regression... 2020-04-08 00:43:06 -04:00
gaasedelen 914b731676 fixes bug where combobox could get stuck closed 2020-04-04 23:17:06 -04:00
gaasedelen 6571b0735f aggregate symbol was unusable in compositions 2020-04-04 19:13:34 -04:00
gaasedelen 83e3b423b9 removes range syntax from grammar, buggy & nobody uses it... 2020-04-04 19:10:58 -04:00
gaasedelen 2ed77f305d improve xref dialog for high dpi 2020-04-04 17:44:48 -04:00
gaasedelen 25ff8ed245 fix theme auto color selection bugginess 2020-04-04 16:24:27 -04:00
gaasedelen 3054246a8d bugfix for theme auto paint color selection 2020-04-04 08:30:36 -04:00
gaasedelen 5670e3116c improve xref dialog 2020-04-04 03:19:49 -04:00
gaasedelen 8a2c011636 combobox style tweaks 2020-04-04 02:37:14 -04:00
gaasedelen 6ad0af71cf fix bug when unable to find mappable instructions in cov data 2020-04-04 02:11:59 -04:00
gaasedelen 181b13d0f7 more metadata cache cleanup, robustness, QUALITY SOFTWARE 2020-04-03 21:26:22 -04:00
gaasedelen 5433cdc8e4 streamline metadata collection, allow transition from async to synchronous 2020-04-03 07:18:54 -04:00
gaasedelen 416a46b8ab small QoL tweaks 2020-04-03 07:16:00 -04:00
gaasedelen db1f4ebc68 make coverage xref menu action only appear if there is coverage loaded 2020-04-03 00:19:05 -04:00
gaasedelen c44f35e5f4 add a dark / light coverage paint variant to the themes 2020-04-02 23:55:42 -04:00
gaasedelen 7d7ee5b9f0 improve click + drag text selection in composing shell 2020-04-02 21:38:12 -04:00
gaasedelen 4d36be57c8 add a menu option to dump unmappable coverage data to the console... 2020-04-02 21:06:51 -04:00
gaasedelen 98745a09c4 a few more readme updates 2020-04-02 20:34:27 -04:00
gaasedelen 6dcb3c769c Update README.md
at least some basic updates to the readme so as to not confuse anyone trying to use the dev branch
2020-04-02 20:10:03 -04:00
gaasedelen 11d5f9e62f enforce stricter logic around fuzzy name matching #63 2020-04-02 19:19:52 -04:00
gaasedelen cad8679170 tweaks & typos, i'm tired 2020-04-02 08:13:04 -04:00
gaasedelen 410adc45a5 improve coverage xref styling 2020-04-02 08:03:59 -04:00
gaasedelen 02ea88e3d7 adds fallback selector dialog if the database's loaded module (binary) cannot be found in a coverage file. closes #63 2020-04-02 08:03:35 -04:00
gaasedelen e89a36b9d6 minor binja tweaks, to keep dev working with it for now... 2020-04-02 06:00:34 -04:00
gaasedelen f6902baf38 update drcov parser to account for multi-segment modules 2020-04-02 04:42:57 -04:00
gaasedelen 79c90db5b0 fixes bug where a leftover / mostly deleted coverage overview could get left hanging around 2020-04-02 00:39:30 -04:00
gaasedelen 93228c880b Merge branch 'fbl_themes' into develop 2020-04-02 00:03:42 -04:00
gaasedelen 4b63a0f857 Rename dark theme, again. crosses off themes from readme 2020-04-02 00:02:25 -04:00
gaasedelen fb65c06b1c some cleanup 2020-04-01 23:55:09 -04:00
gaasedelen 2af8854673 add a bit of theme validation ... 2020-04-01 23:54:57 -04:00
gaasedelen 9b85603828 rename 'classic' theme to 'synthwave' 2020-04-01 23:19:54 -04:00
gaasedelen f7ade4eaa3 enable theme change button 2020-04-01 23:18:59 -04:00
gaasedelen be7815ff06 remove hardcoded colors from parts of HTML report 2020-04-01 23:08:08 -04:00
gaasedelen a52b8d5fda split out all theme-dependent code into refreshable functions 2020-04-01 09:15:45 -04:00
gaasedelen 5ae17c85c8 add theme change callback to the palette 2020-04-01 07:32:18 -04:00
gaasedelen 2a2b646f29 minor cleanup / refactoring of palette 2020-03-31 22:43:27 -04:00
gaasedelen 9946863e41 populate user theme dir, auto-select best theme, save/load theme preference from disk, 2020-03-31 19:18:25 -04:00
gaasedelen 2d7d0d598b adds light theme 2020-03-31 02:31:16 -04:00
gaasedelen 72ec6cccf4 updates, more tweakable fields 2020-03-31 02:30:24 -04:00
gaasedelen d53e5032d0 initial theme/palette refactoring 2020-03-31 00:02:37 -04:00
gaasedelen da5942466a update dates and version number, since there won't be a v0.8.4 release... 2020-03-29 17:34:21 -04:00
gaasedelen 0ef5c9d9e1 expose the active CoverageDirector object instance for the disassembler interpreter, or other scripts to use 2020-03-29 17:31:07 -04:00
Jan Beck b1488c3086 Add compatibility with PIN 3.11 (#77)
* Add compatibility with PIN 3.11

Co-authored-by: gaasedelen <gaasedelen@users.noreply.github.com>
2020-03-29 05:36:04 -04:00
lucasg b8a996b5f1 Update README.md to provide more details on finding IDA plugin dir w/ API (#72) 2020-03-29 05:13:27 -04:00
Alexandre Maloteaux e5b9f34193 fix for ida 7.4 with python3.7 (#79)
* fix for ida 7.4 with python3.7

* fix html export too

* fix painter

* tweaked to use our own dict shims

* a few more minor fixes

Co-authored-by: gaasedelen <markus.gaasedelen@gmail.com>
2020-03-29 04:59:29 -04:00
gaasedelen 257d69594e fully deprecates IDA 6.X 2020-03-29 03:43:04 -04:00
gaasedelen 28ea6b8821 make drcov.py work natively outside of lighthouse 2020-03-29 02:43:28 -04:00
gaasedelen d93b52354f make it so that partially executed nodes do not have their whole graph node colored 2020-03-28 23:03:24 -04:00
gaasedelen e0d309025f improve lighthouse's accuracy on interleaved instructions 2020-03-25 06:22:45 -04:00
gaasedelen c57296e649 fix improper usage of traceback 2020-03-25 03:11:58 -04:00
gaasedelen c9788723ba remove unnecessary use of map, can cause ambiguity if left unevaluated 2020-03-25 03:11:04 -04:00
gaasedelen 29ecafd885 fixes bug where binja could hang while building metadata 2020-03-22 03:10:27 -04:00
gaasedelen b9d514823b shady temp fix to make lighthouse mostly work with current binja 2020-03-21 04:36:28 -04:00
gaasedelen 2eab6d902c python 3 compat tweaks 2020-03-21 04:36:28 -04:00
gaasedelen 1ddffa5a19 most of the plumbing for supporting an image rebase, barring more testin... 2020-03-21 04:36:28 -04:00
gaasedelen e70c246637 make BADADDR more 'universal' 2020-03-21 04:36:28 -04:00
gaasedelen c0afc3b0c1 create 'is disassembler busy' api 2020-03-21 04:36:28 -04:00
gaasedelen 3fd3640517 Improves coverage exception warning code 2020-03-21 04:36:28 -04:00
gaasedelen e4ecc0dafd fixes minor Qt warning for IDA 7.4 2020-03-21 04:36:28 -04:00
Jakob Pearson ac12ef74db Minor changes, to compile on Windows with VS2017 (#69)
* Update build-x64.bat

* Update build-x86.bat

* Update ImageManager.cpp

* Update CodeCoverage.cpp

* Update CodeCoverage.cpp

* Update ImageManager.cpp
2019-10-22 12:13:27 -07:00
yrp 8593e976d1 Derp, syntax... 2019-10-22 15:06:42 -04:00
yrp b505af9956 Allow blank and comments in modoff 2019-10-22 15:06:42 -04:00
yrp 7578faea82 Fix directory types
Tbh I didn't analyze this in depth -- it was a type error causing
lighthouse to bail out so I just mashed everything into a set. It seems
to work tho, so #shipit
2019-10-22 15:06:42 -04:00
Dominik Maier 7caa33df2e Fixed python3 bytesstrings (#71) 2019-10-22 15:03:17 -04:00
gaasedelen 9b1d579d3d improve disassembler logging compatibility 2019-05-18 16:56:11 -04:00
gaasedelen f6932bd8d0 adds qt_mainthread global 2019-05-17 19:17:39 -04:00
gaasedelen 80404a9f57 make disassembler version fields less annoying 2019-05-17 14:08:25 -04:00
volokitinss 366df2f5d8 Match upper case hex symbols as well (#67) 2019-05-10 12:29:52 -04:00
gaasedelen 7303d72fa2 python 3 compat 2019-04-19 19:58:29 -04:00
gaasedelen 8a656b10d3 tweak: make the function name column left aligned by default 2019-04-19 19:53:03 -04:00
gaasedelen 80d65fd584 styling consistency improvements 2019-04-19 19:52:34 -04:00
gaasedelen 8a03f035a1 improve shell / combobox styling 2019-04-17 17:15:18 -04:00
gaasedelen 9558763e52 bugfix: collapse the coverage combobox when its head is clicked in the expanded state 2019-04-16 18:24:40 -04:00
gaasedelen b82dba0d60 tweak: removes unnecessary composer shell border imposed by some Qt configs 2019-04-16 14:52:32 -04:00
gaasedelen e6f02ab873 allow smooth horizontal scrolling in xref window 2019-04-02 17:20:57 -04:00
gaasedelen 8316012782 switch xref attribution to bb granularity 2019-04-02 17:03:15 -04:00
gaasedelen fa3a13a085 allow for xrefing a batch 2019-04-02 12:26:42 -04:00
gaasedelen d6d0fbc7dd allow for smooth horizontal scrolling in coverage table 2019-04-01 13:54:42 -04:00
gaasedelen 1463949921 Merge branch 'fbl_block_blame' into develop, resolves #8 2019-03-31 19:56:36 -04:00
gaasedelen b6bf203b8c cleanup IDA integration of xrefs 2019-03-31 19:48:30 -04:00
gaasedelen 8052798c23 cleanup the xref dialog a bit 2019-03-31 19:17:47 -04:00
gaasedelen f89f3609f9 fix issue where coverage names are not saved to composed sets 2019-03-31 19:06:57 -04:00
gaasedelen 02d52fce73 add timestamps to coverage 2019-03-31 18:55:24 -04:00
gaasedelen 44cb1c8113 rough coverage xref dialog 2019-03-31 17:57:57 -04:00
gaasedelen dd50a3a8c8 rough draft of coverage xref for ida #8 2019-03-30 17:52:23 -04:00
gaasedelen ffc45f86a8 further refactor some loading code 2019-03-30 14:13:50 -04:00
gaasedelen e4ea2956e8 improve batch loading, refactor loading in general 2019-03-30 14:13:49 -04:00
gaasedelen 5468bef842 refactoring some loading code, add 'trace' format 2019-03-30 14:12:56 -04:00
gaasedelen 420c735600 rename testcases 2019-03-30 14:11:17 -04:00
gaasedelen beb715cf9b tweaks to reader & cov file code 2019-03-30 14:11:17 -04:00
gaasedelen 72fe0f90e6 add instruction sizes to metadata cache 2019-03-30 14:10:11 -04:00
xarkes 7cab1c1db9 Added python3 support (#62)
* Added Python3 support
* remove dependency of six library
* fixups based on Binja with Py3.6
2019-03-17 20:56:28 -04:00
gaasedelen b27698520b split metadata cache out of director 2019-03-15 19:48:57 -04:00
gaasedelen a55ede77f9 bugfix: deleted functions were not properly removed from metadata cache on refresh 2019-03-15 19:47:26 -04:00
gaasedelen 0f51554d8c suppress some IDA warning messages during func renames 2019-03-15 19:43:26 -04:00
gaasedelen 50cfd522f5 fix bad code in recent coverage loading work 2019-03-15 19:42:17 -04:00
gaasedelen 03717b03a2 rough plumbing for additional coverage formats #41 2019-03-10 16:06:10 -04:00
gaasedelen 92dfae662c Merge pull request #53 from v-p-b/binja_api_fix
Binary Ninja API fixups
2019-02-15 15:41:23 -05:00
gaasedelen 3b1c776455 Merge pull request #56 from AndrewFasano/master
Add python3 support for drcov parser
2019-01-31 12:37:04 -05:00
Andrew Fasano 5423bbf7e9 Add python3 support for drcov parser
And remain compatible with python2
2019-01-25 15:50:24 -05:00
b 0a5870a65a Added comments 2019-01-06 09:36:18 +01:00
b f3fd77e863 Stable+Dev compatibility 2019-01-06 08:55:50 +01:00
b 4b5ceabed1 Binja API: Getters -> Data members 2019-01-02 16:06:27 +01:00
gaasedelen 675cc8738d Merge branch 'develop' 2018-12-12 13:49:09 -05:00
gaasedelen 1ab2212115 Merge pull request #51 from yrp604/patch-1
Fix buggy drcov parsers
2018-12-12 13:37:59 -05:00
gaasedelen 713ac9d2aa Merge branch 'develop' into patch-1 2018-12-12 13:37:34 -05:00
_yrp 357bc5f839 Fix buggy drcov parsers
These two are definitely wrong, the other one might be too.

Found when some jerk compiled their own linux drcov plugin -- otherwise the precompiled binaries still spit out v2 module tables.
2018-12-11 20:45:35 -08:00
gaasedelen 8d60f778ef fixes botched fix, #50 2018-12-09 21:37:03 -05:00
gaasedelen 677723ed18 group lighthouse plugin menus in binja 2018-12-02 07:54:45 -05:00
gaasedelen b6e9ffe576 update to work with binja dev (breaks stable :-x) 2018-12-02 07:49:22 -05:00
gaasedelen dae71152e4 fixes thread safety issue in IDA 7.2, #50 2018-12-02 07:35:07 -05:00
gaasedelen f071dc6dce minor updates to pin documentation, #46 2018-11-18 20:59:21 -05:00
gaasedelen 83a2d71402 Merge pull request #49 from Ayrx/revert-47-improve-frida-script
Revert "Improve Frida coverage collection on quick exits."
2018-11-06 14:27:58 -05:00
Terry Chia e890ac60e9 Revert "Improve Frida coverage collection on quick exits." 2018-11-06 16:04:19 +08:00
gaasedelen 2b82bb69d5 fix edge case that could crash painter thread 2018-11-06 01:00:22 -05:00
gaasedelen 7cc235bde4 fix pintool on newer versions of gcc #46 2018-11-05 20:34:29 -05:00
gaasedelen 53e791b96e Merge pull request #47 from Ayrx/improve-frida-script
Improve Frida coverage collection on quick exits.
2018-10-31 14:36:21 -04:00
Terry Chia 48fd65de34 Improve Frida coverage collection on quick exits. 2018-10-30 14:48:20 +08:00
gaasedelen e6cf1d8292 Update drcov format v4 parsing for Windows, #45 2018-10-19 15:05:37 -04:00
gaasedelen 04225a0351 avoid division by zero, fixes #44 2018-10-19 15:00:45 -04:00
gaasedelen d3ba2c0ad7 hotfix for binja versioning bug #42 2018-10-12 11:27:20 -04:00
gaasedelen a81aa6d590 hotfix for personal binja licenses 2018-10-11 10:39:01 -04:00
gaasedelen fbbdd91576 Merge branch 'hotfix-0.8.1' 2018-10-03 01:45:19 -04:00
gaasedelen 03cc67e629 further tweak table row height based on DPI 2018-10-03 01:44:41 -04:00
gaasedelen 116f82e6bc only disable painting (on clear) if other coverage is open 2018-10-02 16:45:42 -04:00
gaasedelen 3f22eddc82 auto-cache metadata when opening coverage overview 2018-10-02 16:44:15 -04:00
gaasedelen 2bc6f109db fix for minor regression 2018-10-02 16:00:25 -04:00
gaasedelen 316e7622d7 bumps version number 2018-09-29 17:40:44 -04:00
gaasedelen bf66b02df9 fixes DPI regressions 2018-09-29 17:39:24 -04:00
gaasedelen bd8862923e fixes regression that could cause IDA 6.8 --> 7.0 to hang on exit 2018-09-29 17:34:43 -04:00
gaasedelen 8e09989a67 Merge branch 'develop' 2018-09-23 17:53:35 -04:00
gaasedelen d81e33f5f5 updated readme 2018-09-23 17:51:49 -04:00
gaasedelen 35b45b2a7b minor bugfix 2018-09-23 16:29:01 -04:00
gaasedelen fe0ff33d38 bump version number 2018-09-22 21:17:51 -04:00
gaasedelen 0f2b9897f8 Merge branch 'master' into develop 2018-09-22 20:10:12 -04:00
gaasedelen 72725d7b86 disable lighthouse in binja headless 2018-09-22 20:08:05 -04:00
gaasedelen aaea2cb472 add filepath tooltip to combobox 2018-09-21 21:07:27 -04:00
gaasedelen d1fc0f7f06 comment cleanup, spellchecks, etc 2018-09-21 20:33:17 -04:00
gaasedelen b6d52dad5c fix binja bug where blank names did not reset 2018-09-04 12:30:37 -04:00
gaasedelen 419b77aebe minor fixes, painting, table 2018-09-04 11:58:42 -04:00
gaasedelen 7946d32edd improve cross-platform styling/appearance 2018-09-03 18:05:19 -04:00
gaasedelen 8ea8f6c2d7 fix crashes & hacks for binja dock widget 2018-09-02 19:07:58 -04:00
gaasedelen c532f1f658 minor fixes 2018-09-02 18:07:42 -04:00
gaasedelen e4a1c1cd70 refactor some metadata collection code, fix some binja deadlocks 2018-09-02 18:07:25 -04:00
gaasedelen 5dc9c21141 update disassembler shimming/API a bit 2018-09-02 17:34:15 -04:00
gaasedelen 13e601610a adds basic export to HTML functionality 2018-09-02 10:19:55 -04:00
gaasedelen 5d2abe6162 refactor DatabaseCoverage instantiation, director 2018-09-01 20:58:29 -04:00
gaasedelen 4a26f7b040 minor fixes 2018-09-01 12:46:26 -04:00
gaasedelen 2f78f4638b refactor coverage loading, adds more verbose warnings 2018-09-01 12:34:46 -04:00
gaasedelen 3defabc4f9 add a property to identify abnormal coverage mappings 2018-09-01 12:34:46 -04:00
gaasedelen 0222c1a16a more explicit warnings for coverage load failures 2018-09-01 12:34:45 -04:00
gaasedelen 6a08093d2f Merge branch 'fbl_table_rework' into develop 2018-09-01 12:22:09 -04:00
gaasedelen b98baa6206 minor fixes 2018-08-30 23:13:35 -04:00
gaasedelen b68c9e4dc5 creates coverage settings menu 2018-08-30 23:02:28 -04:00
gaasedelen c7f80b7bfe refactor some core painting 2018-08-30 23:01:49 -04:00
gaasedelen 4baaa3500d minor fix 2018-08-30 23:00:32 -04:00
gaasedelen 011b06d936 hopefully more stable painting for IDA 6.8 -> 7.1 2018-08-27 02:11:42 -04:00
gaasedelen dfe7407e67 allow the painter to be enabled/disabled 2018-08-27 02:08:51 -04:00
gaasedelen 9b2d8b8273 binary ninja now supports original filename 2018-08-23 04:16:30 -04:00
gaasedelen 8df09f23d5 some plumbing for potential future features 2018-08-17 20:41:42 -04:00
gaasedelen 3300c5abcf cleanup CoverageTableModel 2018-08-14 19:58:40 -04:00
gaasedelen 109fcd1aac remove random file :') 2018-08-14 18:45:16 -04:00
gaasedelen 0aa2b9602f enable table navigation with arrow keys & vim keys 2018-08-12 13:41:50 -04:00
gaasedelen fbf0a0da10 combobox theme enlightenment 2018-08-11 23:36:17 -04:00
gaasedelen a7166eb9f2 enlighten shell hint to palette colors 2018-08-11 22:57:45 -04:00
gaasedelen f4ba88f581 fix regression with IDA 2018-08-11 22:23:37 -04:00
gaasedelen b0c4f16ee4 make the shell hint even less aggressive 2018-08-11 22:16:04 -04:00
gaasedelen 868e071483 split coverage table into Model/View/Controller 2018-08-11 22:05:10 -04:00
gaasedelen 530941195a misc 2018-08-11 19:03:03 -04:00
gaasedelen 20305574a2 made composing shell hint less aggressive 2018-08-11 14:14:53 -04:00
gaasedelen 5c84de332a cleanup IDA painting a bit 2018-08-11 13:58:56 -04:00
gaasedelen 0ad5ac72d8 fix rename events misfiring on binja 2018-08-11 13:58:36 -04:00
gaasedelen 8610983f98 better cyclomatic complexity 2018-08-11 13:57:32 -04:00
gaasedelen 8cbc9d9bef toggle coverage table column alignment 2018-08-10 20:00:57 -04:00
gaasedelen 53f58e5560 adds experimental support for Binary Ninja 2018-08-10 12:46:13 -04:00
gaasedelen c509770fc3 some cleanup, fixes a few IDA compatibility breakages 2018-08-09 20:09:33 -04:00
gaasedelen 33e054ce34 update dev scripts 2018-08-09 20:08:15 -04:00
gaasedelen c48af5b9d0 fixes for renaming/prefixes via table 2018-08-09 18:10:22 -04:00
gaasedelen de064f088e makes lighthouse theme aware for binja 2018-08-09 17:00:51 -04:00
gaasedelen bf37c29dc0 refactored Qt usage, dockable binja widgets 2018-08-09 15:07:04 -04:00
gaasedelen ea3fbcb6f8 catch binaryninja rename events 2018-08-07 23:18:35 -04:00
gaasedelen 1492ea0c20 *greatly* improved binja metadata collection speed 2018-08-07 20:14:40 -04:00
gaasedelen 92797f30b0 improved binja painting 2018-08-07 18:28:04 -04:00
gaasedelen 6cadbf998e updated & categorized TODO's 2018-08-07 14:25:07 -04:00
gaasedelen 5fdb2f1735 added other lighthouse menu options to binja 2018-08-07 13:50:58 -04:00
gaasedelen 895951ecc9 general table appearance improvements for binja 2018-08-07 13:29:51 -04:00
gaasedelen 4b587696ab fix bug parsing binja dev string 2018-08-07 13:28:21 -04:00
gaasedelen 74798d1fb8 slight tweaks to the combobox spacing/grid lines 2018-08-07 13:27:59 -04:00
gaasedelen fef71a7092 disassembler API now built on Abstract Classes 2018-08-06 17:17:52 -04:00
gaasedelen b75d4947b5 strip copied code from binja painter 2018-08-05 15:05:42 -04:00
gaasedelen 2b42355e94 improved painting performance on BN 2018-08-05 15:02:14 -04:00
gaasedelen 150a984bd9 enable logging in IDA launchers 2018-08-03 20:10:41 -04:00
gaasedelen ba0f5508a7 functioning binja prototype 2018-08-03 20:10:11 -04:00
gaasedelen 6b425c1977 split painting engine 2018-08-03 17:02:08 -04:00
gaasedelen 9e5e898952 core coverage loading works in binja 2018-08-03 11:09:09 -04:00
gaasedelen 9c2c9d8850 shim metadata collection 2018-08-02 23:41:07 -04:00
gaasedelen bf3389740c remove idaapi dep from coverage_overview 2018-08-02 20:15:54 -04:00
gaasedelen a9ada01183 refactor mainthread detection to be agnostic of disassembler 2018-08-02 19:58:27 -04:00
gaasedelen fa605c8261 remove idaapi dep from coverage 2018-08-02 19:49:17 -04:00
gaasedelen 898788cdbb migrate to universal waitbox 2018-08-02 18:32:10 -04:00
gaasedelen c19ffd275c remove idaapi/idautils as core dependency 2018-08-02 18:15:25 -04:00
gaasedelen c0477c4724 move prompt_string to qt 2018-08-02 18:14:53 -04:00
gaasedelen 8996e4e009 Integration seperation 2018-08-02 16:42:38 -04:00
gaasedelen 7202e2f455 start of binja port 2018-08-01 17:42:22 -04:00
gaasedelen 258b901273 Revert "refactor coverage table code into its own QTableView"
This reverts commit 26f391b4b4.
2018-08-01 17:40:15 -04:00
gaasedelen d8db54dcbd Merge pull request #39 from yrp604/master
Improve frida coverage saving
2018-07-10 10:44:44 -04:00
yrp b92a25906f Make coverage saving more robust
The most bug prone part of frida (in our usage) seems to be process
detatching. Previously the script would detatch and then persist all
coverage data once detatching had succeeded. This of course means that
if detatching fails or hangs, all collected coverage is lost. This is
Not Good.

This commit adds a SIGINT handler which persists the existing coverage
before calling `_exit(1)`. While this doesn't seem to be an especially
quick operation, it does save the coverage before exiting.

This commit does not address Windows usage, or processes exiting in ways
other than SIGINT.
2018-07-10 00:24:36 -07:00
_yrp bb9c4bafd0 Merge pull request #1 from gaasedelen/master
merge upstream
2018-07-09 22:26:24 -07:00
gaasedelen 5028b163d7 Merge pull request #35 from vnd-eset/master
Add support for ASCII DrCov logs
2018-04-27 12:54:45 -04:00
gaasedelen 1cbff296af Merge branch 'hotfix-0.7.2' 2018-04-23 21:42:54 -04:00
gaasedelen 36127e74ef hotfix for new drcov log formats, issue #36 2018-04-23 21:42:07 -04:00
gaasedelen 26f391b4b4 refactor coverage table code into its own QTableView 2018-04-17 19:02:31 -04:00
Krzysztof Katowicz-Kowalewski d48f88d688 Add support for ASCII DrCov logs 2018-04-11 05:59:38 -07:00
gaasedelen e0cf26acc8 Merge pull request #33 from x9090/dev-x9090
Fixed missing node metadata due to overly aggressive filter
2018-03-23 13:02:35 -04:00
x9090 4d529e2844 Fixed missing node metadata due to overly aggressive filter 2018-03-20 11:55:45 +08:00
gaasedelen 006c46b472 hotfix for MacOS hangs (and crashes) on close #31 2017-12-29 23:16:16 -05:00
gaasedelen f2dff09b99 Merge pull request #28 from paulsapps/feature_copy_multiple_function_name_and_addresses
Allow copy operations to work on multiple selected functions
2017-12-17 20:44:11 -05:00
Paul 3024fd208d Allow copy operations to work on multiple selected functions 2017-12-18 00:51:00 +00:00
107 changed files with 29874 additions and 7111 deletions
+1 -1
@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2017 Markus Gaasedelen
Copyright (c) 2017-2024 Markus Gaasedelen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
+91 -76
@@ -1,18 +1,21 @@
# Lighthouse - Code Coverage Explorer for IDA Pro
# Lighthouse - A Coverage Explorer for Reverse Engineers
<p align="center">
<img alt="Lighthouse Plugin" src="screenshots/overview.png"/>
<img alt="Lighthouse Plugin" src="screenshots/overview.gif"/>
</p>
## Overview
Lighthouse is a code coverage plugin for [IDA Pro](https://www.hex-rays.com/products/ida/). The plugin leverages IDA as a platform to map, explore, and visualize externally collected code coverage data when symbols or source may not be available for a given binary.
Lighthouse is a powerful code coverage explorer for [IDA Pro](https://www.hex-rays.com/products/ida/) and [Binary Ninja](https://binary.ninja/), providing software researchers with uniquely interactive controls to study execution maps for native applications without requiring symbols or source.
This plugin is labeled only as a prototype and IDA / Qt code example for the community.
This project placed 2nd in IDA's [2017 Plug-In Contest](https://hex-rays.com/contests_details/contest2017/) and was later [nominated](https://pwnies.com/lighthouse/) in the 2021 Pwnie Awards for its contributions to the security research industry.
Special thanks to [@0vercl0k](https://twitter.com/0vercl0k) for the inspiration.
## Releases
* v0.9 -- Python 3 support, custom coverage formats, coverage cross-refs, theming subsystem, much more.
* v0.8 -- Binary Ninja support, HTML coverage reports, consistent styling, many tweaks, bugfixes.
* v0.7 -- Frida, C++ demangling, context menu, function prefixing, tweaks, bugfixes.
* v0.6 -- Intel pintool, cyclomatic complexity, batch load, bugfixes.
* v0.5 -- Search, IDA 7 support, many improvements, stability.
@@ -21,72 +24,93 @@ Special thanks to [@0vercl0k](https://twitter.com/0vercl0k) for the inspiration.
* v0.2 -- Multifile support, performance improvements, bugfixes.
* v0.1 -- Initial release
## Installation
# Installation
Install Lighthouse into the IDA plugins folder.
Lighthouse is a cross-platform (Windows, macOS, Linux) Python 2/3 plugin. It takes zero third party dependencies, making the code both portable and easy to install.
- Copy the contents of the `plugin` folder to the IDA plugins folder
- On Windows, the folder is at `C:\Program Files (x86)\IDA 6.8\plugins`
- On MacOS, the folder is at `/Applications/IDA\ Pro\ 6.8/idaq.app/Contents/MacOS/plugins`
- On Linux, the folder may be at `/opt/IDA/plugins/`
Use the instructions below for your respective disassembler.
The plugin is compatible with IDA Pro 6.8 --> 7.0 on Windows, MacOS, and Linux.
## IDA Installation
## Usage
1. From IDA's Python console, run the following command to find its plugin directory:
- `import idaapi, os; print(os.path.join(idaapi.get_user_idadir(), "plugins"))`
2. Copy the contents of this repository's `/plugins/` folder to the listed directory.
3. Restart your disassembler.
Lighthouse loads automatically when an IDB is opened, installing a handful of menu entries into the IDA interface.
## Binary Ninja Installation
Lighthouse can be installed through the plugin manager on Binary Ninja, supporting v3.5 and newer.
1. Open Binary Ninja's plugin manager by navigating the following submenus:
- `Edit` -> `Preferences` -> `Manage Plugins`
2. Search for Lighthouse in the plugin manager, and click the `Enable` button in the bottom right.
3. Restart your disassembler.
# Usage
Once properly installed, there will be a few new menu entries available in the disassembler. These are the entry points for a user to load coverage data and start using Lighthouse.
<p align="center">
<img alt="Lighthouse Menu Entries" src="screenshots/open.gif"/>
</p>
These are the entry points for a user to load and view coverage data.
```
- File --> Load file --> Code coverage file...
- File --> Load file --> Code coverage batch...
- View --> Open subviews --> Coverage Overview
```
Batch load can quickly aggregate hundreds (thousands?) of collected coverage files into a single composite at load time.
Lighthouse is able to load a few different 'flavors' of coverage data. To generate coverage data that can be loaded into Lighthouse, please look at the [README](https://github.com/gaasedelen/lighthouse/tree/master/coverage) in the coverage directory of this repository.
## Coverage Painting
Lighthouse 'paints' the active coverage data across the three major IDA views as applicable. Specifically, the Disassembly, Graph, and Pseudocode views.
While Lighthouse is in use, it will 'paint' the active coverage data across all of the code viewers available in the disassembler. Specifically, this will apply to your linear disassembly, graph, and decompiler windows.
<p align="center">
<img alt="Lighthouse Coverage Painting" src="screenshots/painting.png"/>
</p>
## Coverage Overview
In Binary Ninja, only the linear disassembly, graph, and IL views are supported. Support for painting decompiler output in Binary Ninja will be added to Lighthouse in the *near future* as the feature stabilizes.
The Coverage Overview is a dockable widget that provides a function level view of the active coverage data for the database.
# Coverage Overview
The Coverage Overview is a dockable widget that will open up once coverage has been loaded into Lighthouse.
<p align="center">
<img alt="Lighthouse Coverage Overview" src="screenshots/overview.png"/>
</p>
This table can be sorted by column, and entries can be double clicked to jump to their corresponding disassembly.
This interactive widget provides a function level view of the loaded coverage data. It also houses a number of tools to manage loaded data and drive more advanced forms of coverage analysis.
## Context Menu
Right clicking the table in the Coverage Overview will produce a context menu with a few basic amenities.
Right clicking the table in the Coverage Overview will produce a context menu with a few basic amenities to extract information from the table, or manipulate the database as part of your reverse engineering process.
<p align="center">
<img alt="Lighthouse Context Menu" src="screenshots/context_menu.gif"/>
</p>
These actions can be used to quickly manipulate or interact with entries in the table.
If there are any other actions that you think might be useful to add to this context menu, please file an issue and they will be considered for a future release of Lighthouse.
## Coverage Composition
## Coverage ComboBox
Building relationships between multiple sets of coverage data often distills deeper meaning than their individual parts. The shell at the bottom of the [Coverage Overview](#coverage-overview) provides an interactive means of constructing these relationships.
Loaded coverage and user constructed compositions can be selected or deleted through the coverage combobox.
<p align="center">
<img alt="Lighthouse Coverage ComboBox" src="screenshots/combobox.gif"/>
</p>
## HTML Coverage Report
Lighthouse can generate rudimentary HTML coverage reports. A sample report can be seen [here](https://rawgit.com/gaasedelen/lighthouse/master/testcase/report.html).
<p align="center">
<img alt="Lighthouse HTML Report" src="screenshots/html_report.gif"/>
</p>
# Coverage Shell
At the bottom of the coverage overview window is the coverage shell. This shell can be used to perform logic-based operations that combine or manipulate the loaded coverage sets.
<p align="center">
<img alt="Lighthouse Coverage Composition" src="screenshots/shell.gif"/>
</p>
Pressing `enter` on the shell will evaluate and save a user constructed composition.
This feature is extremely useful in exploring the relationships of program execution across multiple runs. In other words, the shell can be used to 'diff' execution between coverage sets and extract a deeper meaning that is otherwise obscured within the noise of their individual parts.
## Composition Syntax
@@ -94,16 +118,27 @@ Coverage composition, or _Composing_ as demonstrated above is achieved through a
### Grammar Tokens
* Logical Operators: `|, &, ^, -`
* Coverage Symbol: `A, B, C, ..., Z`
* Coverage Range: `A,C`, `Q,Z`, ...
* Coverage Symbol: `A, B, C, ..., Z, *`
* Parenthesis: `(...)`
### Example Compositions
* `A & B`
* `(A & B) | C`
* `(C & (A - B)) | (F,H & Q)`
The evaluation of the composition may occur right to left, parenthesis are suggested for potentially ambiguous expressions.
1. Executed code that is *shared* between coverage `A` and coverage `B`:
```
A & B
```
2. Executed code that is *unique* only to coverage `A`:
```
A - B
```
3. Executed code that is *unique* to `A` or `B`, but not `C`:
```
(A | B) - C
```
Expressions can be of arbitrary length or complexity, but the evaluation of a composition may occur right to left, so parentheses are suggested for potentially ambiguous expressions.
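As a loose illustration (the coverage sets and addresses below are made up), the composition operators behave like ordinary set algebra over the executed addresses of each coverage set:

```python
# Hypothetical coverage sets: each is just the set of executed block
# addresses for one run. This illustrates the composition semantics,
# not Lighthouse's actual parser or data model.
A = {0x401000, 0x401010, 0x401020}
B = {0x401010, 0x401020, 0x401030}
C = {0x401020}

shared       = A & B        # 1. code shared between A and B
unique_to_a  = A - B        # 2. code unique to A
either_not_c = (A | B) - C  # 3. code in A or B, but not C

print(sorted(hex(addr) for addr in shared))
```

Parenthesizing explicitly, as in the third expression, removes any ambiguity about evaluation order.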
## Hot Shell
@@ -117,7 +152,7 @@ The hot shell serves as a natural gateway into the unguided exploration of compo
## Search
Using the shell, one can search and filter the functions listed in the coverage table by prefixing their query with `/`.
Using the shell, you can search and filter the functions listed in the coverage table by prefixing their query with `/`.
<p align="center">
<img alt="Lighthouse Search" src="screenshots/search.gif"/>
@@ -133,64 +168,44 @@ Entering an address or function name into the shell can be used to jump to corre
<img alt="Lighthouse Jump" src="screenshots/jump.gif"/>
</p>
## Coverage ComboBox
# Coverage Cross-references (Xref)
Loaded coverage data and user constructed compositions can be selected or deleted through the coverage combobox.
While using Lighthouse, you can right click any basic block (or instruction) and use the 'Coverage Xref' action to see which coverage sets executed the selected block. Double clicking any of the listed entries will instantly switch to that coverage set.
<p align="center">
<img alt="Lighthouse Coverage ComboBox" src="screenshots/combobox.gif"/>
<img alt="Lighthouse Xref" src="screenshots/xref.gif"/>
</p>
# Collecting Coverage
This pairs well with the 'Coverage Batch' feature, which allows you to quickly load and aggregate thousands of coverage files into Lighthouse. Cross-referencing a block and selecting a 'set' will load the 'guilty' set from disk as a new coverage set for you to explore separate from the batch.
Before using Lighthouse, one will need to collect code coverage data for their target binary / application.
# Themes
The examples below demonstrate how one can use [DynamoRIO](http://www.dynamorio.org), [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) or [Frida](https://www.frida.re) to collect Lighthouse compatible coverage against a target. The `.log` files produced by these instrumentation tools can be loaded directly into Lighthouse.
Lighthouse ships with two default themes -- a 'light' theme, and a 'dark' one. Depending on the colors currently used by your disassembler, Lighthouse will attempt to select the theme that seems most appropriate.
## DynamoRIO
<p align="center">
<img alt="Lighthouse Themes" src="screenshots/themes.png"/>
</p>
Code coverage data can be collected via DynamoRIO's [drcov](http://dynamorio.org/docs/page_drcov.html) code coverage module.
The theme files are stored as simple JSON on disk and are highly configurable. If you are not happy with the default themes or colors, you can create your own themes and simply drop them in the user theme directory.
Example usage:
```
..\DynamoRIO-Windows-7.0.0-RC1\bin64\drrun.exe -t drcov -- boombox.exe
```
## Intel Pin (Experimental)
Using a [custom pintool](coverage/pin) contributed by [Agustin Gianni](https://twitter.com/agustingianni), the Intel Pin DBI can also be used to collect coverage data.
Example usage:
```
pin.exe -t CodeCoverage64.dll -- boombox.exe
```
For convenience, binaries for the Windows pintool can be found on the [releases](https://github.com/gaasedelen/lighthouse/releases/tag/v0.7.0) page. MacOS and Linux users need to compile the pintool themselves following the [instructions](coverage/pin#compilation) included with the pintool for their respective platforms.
## Frida (Experimental)
Lighthouse offers limited support for Frida based code coverage via a custom [instrumentation script](coverage/frida) contributed by [yrp](https://twitter.com/yrp604).
Example usage:
```
sudo python frida-drcov.py bb-bench
```
Lighthouse will remember your theme preference for future loads and uses.
# Future Work
Time and motivation permitting, future work may include:
* Nag Vector35 to fix HLIL highlighting ([bug](https://github.com/Vector35/binaryninja-api/issues/2584)) in Binary Ninja
* ~~Asynchronous composition, painting, metadata collection~~
* ~~Multifile/coverage support~~
* Profiling based heatmaps/painting
* Coverage & Profiling Treemaps
* Coverage & profiling treemaps
* ~~Additional coverage sources, trace formats, etc~~
* Improved Pseudocode painting
* Improved pseudocode painting
* ~~Lighthouse console access~~, headless usage
* ~~Custom themes~~
* ~~Python 3 support~~
I welcome external contributions, issues, and feature requests.
I welcome external contributions, issues, and feature requests. Please make any pull requests to the `develop` branch of this repository if you would like them to be considered for a future release.
# Authors
+11
@@ -0,0 +1,11 @@
# Lighthouse - A Coverage Explorer for Reverse Engineers
<p align="center">
<img alt="Lighthouse Plugin" src="https://raw.githubusercontent.com/gaasedelen/lighthouse/master/screenshots/overview.gif"/>
</p>
## Overview
Lighthouse is a powerful code coverage explorer for [IDA Pro](https://www.hex-rays.com/products/ida/) and [Binary Ninja](https://binary.ninja/), providing software researchers with uniquely interactive controls to study execution maps for native applications without requiring symbols or source.
For additional usage information, please check out the full [README](https://github.com/gaasedelen/lighthouse) on GitHub.
+26
@@ -0,0 +1,26 @@
import os
import sys
#------------------------------------------------------------------------------
# Binary Ninja 'Plugin Manager' Stub
#------------------------------------------------------------------------------
#
# This file is an alternative loading stub created specifically to
# support the ability to easily install Lighthouse into Binary Ninja
# via its 'Plugin Manager' functionality.
#
# Please disregard this code / subdirectory if performing **manual**
# installations of Lighthouse in IDA or Binary Ninja.
#
lh_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "plugins")
sys.path.append(lh_path)
from lighthouse.util.log import logging_started, start_logging
from lighthouse.util.disassembler import disassembler
if not logging_started():
logger = start_logging()
logger.info("Selecting Binary Ninja loader...")
from lighthouse.integration.binja_loader import *
+24
@@ -0,0 +1,24 @@
{
"api": [
"python3"
],
"author": "Markus Gaasedelen",
"description": "A Coverage Explorer for Reverse Engineers",
"license": {
"name": "MIT",
"text": "Copyright (c) 2024 Markus Gaasedelen\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
},
"longdescription": "",
"minimumbinaryninjaversion": 4526,
"name": "Lighthouse",
"platforms": [
"Darwin",
"Linux",
"Windows"
],
"pluginmetadataversion": 2,
"type": [
"helper"
],
"version": "0.9.4"
}
+79
@@ -0,0 +1,79 @@
# Collecting Coverage
Before using Lighthouse, one will need to collect code coverage data for their target binary / application.
The examples below demonstrate how one can use [DynamoRIO](http://www.dynamorio.org), [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) or [Frida](https://www.frida.re) to collect Lighthouse compatible coverage against a target. The `.log` files produced by these instrumentation tools can be loaded directly into Lighthouse.
## DynamoRIO
Code coverage data can be collected via DynamoRIO's [drcov](http://dynamorio.org/docs/page_drcov.html) code coverage module.
Example usage:
```
..\DynamoRIO-Windows-7.0.0-RC1\bin64\drrun.exe -t drcov -- boombox.exe
```
## Intel Pin
Using a [custom pintool](pin/README.md) contributed by [Agustin Gianni](https://twitter.com/agustingianni), the Intel Pin DBI can also be used to collect coverage data.
Example usage:
```
pin.exe -t CodeCoverage64.dll -- boombox.exe
```
For convenience, binaries for the Windows pintool can be found on the [releases](https://github.com/gaasedelen/lighthouse/releases) page. macOS and Linux users need to compile the pintool themselves following the [instructions](pin/README.md#compilation) included with the pintool for their respective platforms.
## Frida (Experimental)
Lighthouse offers limited support for Frida based code coverage via a custom [instrumentation script](frida/README.md) contributed by [yrp](https://twitter.com/yrp604).
Example usage:
```
sudo python frida-drcov.py bb-bench
```
# Other Coverage Formats
Lighthouse is flexible as to what kinds of coverage or 'trace' file formats it can load. Below is an outline of the human-readable text formats that are arguably the easiest to output from a custom tracer.
## Module + Offset (modoff)
A 'Module+Offset' coverage file / trace is a highly recommended coverage format due to its simplicity and readability:
```
boombox+3a06
boombox+3a09
boombox+3a0f
boombox+3a15
...
```
Each line of the trace represents an executed instruction or basic block in the instrumented program. The line *must* name an executed module, e.g. `boombox.exe`, and a relative offset from the imagebase to the executed address.
It is okay for hits from other modules (say, `kernel32.dll`) to exist in the trace. Lighthouse will not load coverage for them.
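A minimal parser for this format can be sketched in a few lines of Python (the module name and trace contents are hypothetical examples; Lighthouse's real parsers are more robust):

```python
def parse_modoff(lines, module="boombox"):
    """Collect module-relative offsets from a Module+Offset trace."""
    hits = set()
    for line in lines:
        name, sep, offset = line.strip().rpartition("+")
        if not sep or name.lower() != module:
            continue  # skip blank lines and hits from other modules
        hits.add(int(offset, 16))
    return hits

trace = ["boombox+3a06", "boombox+3a09", "kernel32+1f00"]
print(sorted(hex(h) for h in parse_modoff(trace)))
```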
## Address Trace (Instruction, or Basic Block)
Perhaps the most primitive coverage format, Lighthouse can also consume an 'absolute address' style trace:
```
0x14000419c
0x1400041a0
0x1400045dc
0x1400045e1
0x1400045e2
...
```
Note that these address traces can be either instruction addresses, or basic block addresses -- it does not matter. The main caveat is that addresses in the trace *must* match the address space within the disassembler database.
If an address cannot be mapped into a function in the disassembler database, Lighthouse will simply discard it.
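When the trace and the database disagree about the base address (ASLR, or a different preferred imagebase), a quick rebase pass is usually all that is needed before loading. A sketch, where both base values are hypothetical examples:

```python
# Rebase absolute trace addresses onto the imagebase used by the
# disassembler database. Both bases below are made-up examples.
DATABASE_BASE = 0x140000000

def rebase(addresses, trace_base, db_base=DATABASE_BASE):
    """Shift addresses from the runtime base to the database base."""
    return [addr - trace_base + db_base for addr in addresses]

runtime_trace = [0x7FF600001000 + off for off in (0x419C, 0x41A0, 0x45DC)]
rebased = rebase(runtime_trace, trace_base=0x7FF600001000)
print([hex(a) for a in rebased])
```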
## Custom Trace Formats
If you want to use a completely custom coverage format, you can subclass Lighthouse's `CoverageFile` parser interface. Once complete, simply drop your parser into the `parsers` folder.
+35 -14
@@ -3,6 +3,8 @@ from __future__ import print_function
import argparse
import json
import os
import signal
import sys
import frida
@@ -63,7 +65,7 @@ maps.map(function (e) {
var filtered_maps = new ModuleMap(function (m) {
if (whitelist.indexOf('all') >= 0) { return true; }
return whitelist.indexOf(m.name) >= 0;
return whitelist.some(item => m.name.toLowerCase().includes(item.toLowerCase()));
});
// This function takes a list of GumCompileEvents and converts it into a DRcov
@@ -164,11 +166,12 @@ Process.enumerateThreads({
});
"""
# These are global so we can easily access them from the frida callbacks
# It's important that bbs is a set, as we're going to depend on it's uniquing
# behavior for deduplication
# These are global so we can easily access them from the frida callbacks or
# signal handlers. It's important that bbs is a set, as we're going to depend
# on it's uniquing behavior for deduplication
modules = []
bbs = set([])
outfile = 'frida-cov.log'
# This converts the object frida sends which has string addresses into
# a python dict
@@ -224,12 +227,12 @@ def create_header(mods):
header_modules = '\n'.join(entries)
return header + header_modules + '\n'
return ("%s%s\n" % (header, header_modules)).encode("utf-8")
# take the recv'd basic blocks, finish the header, and append the coverage
def create_coverage(data):
bb_header = 'BB Table: %d bbs\n' % len(data)
return bb_header + ''.join(data)
bb_header = b'BB Table: %d bbs\n' % len(data)
return bb_header + b''.join(data)
def on_message(msg, data):
#print(msg)
@@ -240,7 +243,26 @@ def on_message(msg, data):
else:
populate_bbs(data)
def sigint(signo, frame):
print('[!] SIGINT, saving %d blocks to \'%s\'' % (len(bbs), outfile))
save_coverage()
print('[!] Done')
os._exit(1)
def save_coverage():
header = create_header(modules)
body = create_coverage(bbs)
with open(outfile, 'wb') as h:
h.write(header)
h.write(body)
def main():
global outfile
parser = argparse.ArgumentParser()
parser.add_argument('target',
help='target process name or pid',
@@ -260,6 +282,8 @@ def main():
args = parser.parse_args()
outfile = args.outfile
device = frida.get_device(args.device)
target = -1
@@ -276,6 +300,8 @@ def main():
'\'%s\' on device \'%s\'' % (args.target, device.id))
sys.exit(1)
signal.signal(signal.SIGINT, sigint)
whitelist_modules = ['all']
if len(args.whitelist_modules):
whitelist_modules = args.whitelist_modules
@@ -297,7 +323,7 @@ def main():
script.on('message', on_message)
script.load()
print('[*] Now collecting info, control-D to terminate....')
print('[*] Now collecting info, control-C or control-D to terminate....')
sys.stdin.read()
@@ -307,12 +333,7 @@ def main():
print('[+] Detached. Got %d basic blocks.' % len(bbs))
print('[*] Formatting coverage and saving...')
header = create_header(modules)
body = create_coverage(bbs)
with open(args.outfile, 'wb') as h:
h.write(header)
h.write(body)
save_coverage()
print('[!] Done')
+38 -26
@@ -25,32 +25,29 @@ using unordered_map = std::tr1::unordered_map<K, V>;
}
// Tool's arguments.
static KNOB<string> KnobModuleWhitelist(KNOB_MODE_APPEND, "pintool", "w", "",
static KNOB<std::string> KnobModuleWhitelist(KNOB_MODE_APPEND, "pintool", "w", "",
"Add a module to the white list. If none is specified, everymodule is white-listed. Example: libTIFF.dylib");
static KNOB<string> KnobLogFile(KNOB_MODE_WRITEONCE, "pintool", "l", "trace.log",
static KNOB<std::string> KnobLogFile(KNOB_MODE_WRITEONCE, "pintool", "l", "trace.log",
"Name of the output file. If none is specified, trace.log is used.");
// Return the file/directory name of a path.
static string base_name(const string& path)
static std::string base_name(const std::string& path)
{
#if defined(TARGET_WINDOWS)
#define PATH_SEPARATOR "\\"
#else
#define PATH_SEPARATOR "/"
#endif
string::size_type idx = path.rfind(PATH_SEPARATOR);
string name = (idx == string::npos) ? path : path.substr(idx + 1);
std::string::size_type idx = path.rfind(PATH_SEPARATOR);
std::string name = (idx == std::string::npos) ? path : path.substr(idx + 1);
return name;
}
// Per thread data structure. This is mainly done to avoid locking.
// - Per-thread map of executed basic blocks, and their size.
struct ThreadData {
// Unique list of hit basic blocks.
pintool::unordered_set<ADDRINT> m_block_hit;
// Map basic a block address to its size.
pintool::unordered_map<ADDRINT, uint16_t> m_block_size;
pintool::unordered_map<ADDRINT, uint16_t> m_blocks;
};
class ToolContext {
@@ -129,7 +126,7 @@ static VOID OnThreadFini(THREADID tid, const CONTEXT* ctxt, INT32 c, VOID* v)
static VOID OnImageLoad(IMG img, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
string img_name = base_name(IMG_Name(img));
std::string img_name = base_name(IMG_Name(img));
ADDRINT low = IMG_LowAddress(img);
ADDRINT high = IMG_HighAddress(img);
@@ -165,24 +162,37 @@ static VOID PIN_FAST_ANALYSIS_CALL OnBasicBlockHit(THREADID tid, ADDRINT addr, U
{
auto& context = *reinterpret_cast<ToolContext*>(v);
ThreadData* data = context.GetThreadLocalData(tid);
data->m_block_hit.insert(addr);
data->m_block_size[addr] = size;
data->m_blocks[addr] = size;
PIN_RemoveInstrumentationInRange(addr, addr);
}
// Trace hit event handler.
static VOID OnTrace(TRACE trace, VOID* v)
{
auto& context = *reinterpret_cast<ToolContext*>(v);
BBL bbl = TRACE_BblHead(trace);
ADDRINT addr = BBL_Address(bbl);
// Check if the address is inside a white-listed image.
if (!context.m_tracing_enabled || !context.m_images->isInterestingAddress(addr))
if (!context.m_tracing_enabled || !context.m_images->isInterestingAddress(TRACE_Address(trace)))
return;
// For each basic block in the trace.
for (; BBL_Valid(bbl); bbl = BBL_Next(bbl)) {
addr = BBL_Address(bbl);
auto tid = PIN_ThreadId();
ThreadData* data = context.GetThreadLocalData(tid);
// This trace is getting JIT'd, which implies the head must get executed.
auto bbl = TRACE_BblHead(trace);
auto addr = BBL_Address(bbl);
data->m_blocks[addr] = (uint16_t)BBL_Size(bbl);
// For each basic block in the trace...
for (bbl = BBL_Next(bbl); BBL_Valid(bbl); bbl = BBL_Next(bbl))
{
// Ignore blocks that have already been marked as executed in the past...
ADDRINT addr = BBL_Address(bbl);
if (data->m_blocks.find(addr) != data->m_blocks.end())
continue;
// Instrument blocks that have not yet been executed (at least... by this thread).
BBL_InsertCall(bbl, IPOINT_ANYWHERE, (AFUNPTR)OnBasicBlockHit,
IARG_FAST_ANALYSIS_CALL,
IARG_THREAD_ID,
@@ -191,6 +201,7 @@ static VOID OnTrace(TRACE trace, VOID* v)
IARG_PTR, v,
IARG_END);
}
}
// Program finish event handler.
@@ -218,7 +229,7 @@ static VOID OnFini(INT32 code, VOID* v)
// Count the global number of basic blocks.
size_t number_of_bbs = 0;
for (const auto& data : context.m_terminated_threads) {
number_of_bbs += data->m_block_hit.size();
number_of_bbs += data->m_blocks.size();
}
context.m_trace->write_string("BB Table: %u bbs\n", number_of_bbs);
@@ -232,7 +243,8 @@ static VOID OnFini(INT32 code, VOID* v)
drcov_bb tmp;
for (const auto& data : context.m_terminated_threads) {
for (const auto& address : data->m_block_hit) {
for (const auto& block : data->m_blocks) {
auto address = block.first;
auto it = std::find_if(context.m_loaded_images.begin(), context.m_loaded_images.end(), [&address](const LoadedImage& image) {
return address >= image.low_ && address < image.high_;
});
@@ -242,7 +254,7 @@ static VOID OnFini(INT32 code, VOID* v)
tmp.id = (uint16_t)std::distance(context.m_loaded_images.begin(), it);
tmp.start = (uint32_t)(address - it->low_);
tmp.size = data->m_block_size[address];
tmp.size = data->m_blocks[address];
context.m_trace->write_binary(&tmp, sizeof(tmp));
}
@@ -251,14 +263,14 @@ static VOID OnFini(INT32 code, VOID* v)
int main(int argc, char* argv[])
{
cout << "CodeCoverage tool by Agustin Gianni (agustingianni@gmail.com)" << endl;
std::cout << "CodeCoverage tool by Agustin Gianni (agustingianni@gmail.com)" << std::endl;
// Initialize symbol processing
PIN_InitSymbols();
// Initialize PIN.
if (PIN_Init(argc, argv)) {
cerr << "Error initializing PIN, PIN_Init failed!" << endl;
std::cerr << "Error initializing PIN, PIN_Init failed!" << std::endl;
return -1;
}
@@ -268,7 +280,7 @@ int main(int argc, char* argv[])
// Create a an image manager that keeps track of the loaded/unloaded images.
context->m_images = new ImageManager();
for (unsigned i = 0; i < KnobModuleWhitelist.NumberOfValues(); ++i) {
cout << "White-listing image: " << KnobModuleWhitelist.Value(i) << endl;
std::cout << "White-listing image: " << KnobModuleWhitelist.Value(i) << std::endl;
context->m_images->addWhiteListedImage(KnobModuleWhitelist.Value(i));
// We will only enable tracing when any of the whitelisted images gets loaded.
@@ -276,7 +288,7 @@ int main(int argc, char* argv[])
}
// Create a trace file.
cout << "Logging code coverage information to: " << KnobLogFile.ValueString() << endl;
std::cout << "Logging code coverage information to: " << KnobLogFile.ValueString() << std::endl;
context->m_trace = new TraceFile(KnobLogFile.ValueString());
// Handlers for thread creation and destruction.
+2 -2
@@ -11,7 +11,7 @@ ImageManager::~ImageManager()
PIN_RWMutexFini(&images_lock);
}
VOID ImageManager::addImage(string image_name, ADDRINT lo_addr,
VOID ImageManager::addImage(std::string image_name, ADDRINT lo_addr,
ADDRINT hi_addr)
{
PIN_RWMutexWriteLock(&images_lock);
@@ -25,7 +25,7 @@ VOID ImageManager::removeImage(ADDRINT low)
{
PIN_RWMutexWriteLock(&images_lock);
{
set<LoadedImage>::iterator i = images.find(LoadedImage("", low));
std::set<LoadedImage>::iterator i = images.find(LoadedImage("", low));
if (i != images.end()) {
LoadedImage li = *i;
images.erase(i);
+1 -1
@@ -7,4 +7,4 @@ TOOL_ROOTS := CodeCoverage
$(OBJDIR)CodeCoverage$(PINTOOL_SUFFIX): $(OBJDIR)CodeCoverage$(OBJ_SUFFIX) $(OBJDIR)ImageManager$(OBJ_SUFFIX)
$(LINKER) $(TOOL_LDFLAGS) $(LINK_EXE)$@ $^ $(TOOL_LPATHS) $(TOOL_LIBS)
include $(TOOLS_ROOT)/Config/makefile.default.rules
+15 -6
@@ -1,6 +1,6 @@
# CodeCoverage Pintool
The `CodeCoverage` pintool runs on top of the [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) DBI framework and collects code coverage data in a log format compatible with [Lighthouse](https://github.com/gaasedelen/lighthouse). The log produced by this pintool emulates that of [drcov](http://dynamorio.org/docs/page_drcov.html) as shipped with [DynamoRIO](http://www.dynamorio.org).
This pintool is labeled only as a prototype.
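For reference, a version-2 drcov log is laid out roughly as follows. This is a hedged sketch based on DynamoRIO's documented format, not literal output of this pintool; the module table is plain text, while the basic-block records that follow are packed binary:

```
DRCOV VERSION: 2
DRCOV FLAVOR: drcov
Module Table: version 2, count 2
Columns: id, base, end, entry, checksum, timestamp, path
  0, 0x0000000000400000, 0x000000000041b000, 0x0, 0x0, 0x0, /path/to/target
  1, 0x00007f1234000000, 0x00007f1234100000, 0x0, 0x0, 0x0, /lib/x86_64-linux-gnu/libc.so.6
BB Table: 2 bbs
(binary records follow: uint32 module-relative start, uint16 block size, uint16 module id)
```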
@@ -12,7 +12,7 @@ Follow the build instructions below for your respective platform.
## Building for MacOS or Linux
On MacOS or Liunux, one can compile the pintool using the following commands.
On MacOS or Linux, one can compile the pintool using the following commands.
```
# Location of this repo / pintool source
@@ -22,6 +22,7 @@ cd ~/lighthouse/coverage/pin
export PIN_ROOT=~/pin
export PATH=$PATH:$PIN_ROOT
make
make TARGET=ia32
```
The resulting binaries will be placed inside a directory whose name depends on the arch/platform/build type.
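Once built, the tool is launched like any other pintool. The command below is an illustrative sketch only: the `-w` (module whitelist) and `-l` (log file) switch names are assumptions and should be verified against the `KNOB` declarations in `CodeCoverage.cpp`:

```
# paths and knob names are illustrative; verify against the KNOB_* definitions
export PIN_ROOT=~/pin
$PIN_ROOT/pin -t obj-intel64/CodeCoverage.so -w target.bin -l trace.log -- ./target.bin
```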
@@ -38,7 +39,11 @@ Launch a command prompt and build the pintool with the following commands.
### 32bit Pintool
```
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86
REM If you are on VS 2022 or so you can run this line:
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x86
REM VS 2015 or so you can run this line instead:
REM "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86
REM Location of this repo / pintool source
cd C:\Users\user\lighthouse\coverage\pin
@@ -52,7 +57,11 @@ build-x86.bat
### 64bit Pintool
```
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86_amd64
REM If you are on VS 2022 or so you can run this line:
"C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvarsall.bat" x86_amd64
REM VS 2015 or so you can run this line instead:
REM "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86_amd64
REM Location of this repo / pintool source
cd C:\Users\user\lighthouse\coverage\pin
@@ -63,12 +72,12 @@ set PATH=%PATH%;%PIN_ROOT%
build-x64.bat
```
The resulting binaries will be labaled based on their architecture (eg, 64 is the 64bit pintool).
The resulting binaries will be labeled based on their architecture (eg, 64 is the 64bit pintool).
* CodeCoverage.dll
* CodeCoverage64.dll
Compiling a pintool on Windows can be more arduous. Because of this, we have provided compiled binaries for Windows on the [releases](https://github.com/gaasedelen/lighthouse/releases/tag/v0.7.0) page.
Compiling a pintool on Windows can be more arduous. Because of this, we have provided compiled binaries for Windows on the [releases](https://github.com/gaasedelen/lighthouse/releases) page. Please be sure to use the pintool that matches your version of Pin.
# Usage
+6 -7
@@ -2,11 +2,12 @@
cls
cl ^
/c ^
/c /Fo /nologo /EHa- /EHs- /GR- /GS- /Gd /Gm- /Gy /MD /O2 /Oi- /Oy- /TP /W3 /WX- /Zc:forScope /Zc:inline /Zc:wchar_t /wd4316 /wd4530 /fp:strict ^
/DTARGET_IA32E /DHOST_IA32E /DTARGET_WINDOWS /DWIN32 /D__PIN__=1 /DPIN_CRT=1 /D_STLP_IMPORT_IOSTREAMS /D__LP64__ ^
/I"%PIN_ROOT%\extras\xed-intel64\include\xed" ^
/I%PIN_ROOT%\source\include\pin ^
/I%PIN_ROOT%\source\include\pin\gen ^
/I%PIN_ROOT%\source\tools\InstLib ^
/I"%PIN_ROOT%\extras\xed-intel64\include\xed" ^
/I%PIN_ROOT%\extras\components\include ^
/I%PIN_ROOT%\extras\stlport\include ^
/I%PIN_ROOT%\extras ^
@@ -16,9 +17,6 @@ cl ^
/I"%PIN_ROOT%\extras\crt\include\arch-x86_64" ^
/I%PIN_ROOT%\extras\crt\include\kernel\uapi ^
/I"%PIN_ROOT%\extras\crt\include\kernel\uapi\asm-x86" ^
/nologo /W3 /WX- /O2 ^
/D TARGET_IA32E /D HOST_IA32E /D TARGET_WINDOWS /D WIN32 /D __PIN__=1 /D PIN_CRT=1 /D __LP64__ ^
/Gm- /MT /GS- /Gy /fp:precise /Zc:wchar_t /Zc:forScope /Zc:inline /GR- /Gd /TP /wd4530 /GR- /GS- /EHs- /EHa- /FP:strict /Oi- ^
/FIinclude/msvc_compat.h CodeCoverage.cpp ImageManager.cpp ImageManager.h TraceFile.h
link ^
@@ -29,7 +27,7 @@ link ^
/LIBPATH:%PIN_ROOT%\intel64\lib ^
/LIBPATH:"%PIN_ROOT%\intel64\lib-ext" ^
/LIBPATH:"%PIN_ROOT%\extras\xed-intel64\lib" ^
/LIBPATH:%PIN_ROOT%\intel64\runtime\pincrt pin.lib xed.lib pinvm.lib kernel32.lib "stlport-static.lib" "m-static.lib" "c-static.lib" "os-apis.lib" "ntdll-64.lib" crtbeginS.obj ^
/LIBPATH:%PIN_ROOT%\intel64\runtime\pincrt pin.lib xed.lib pinipc.lib pincrt.lib kernel32.lib crtbeginS.obj ^
/NODEFAULTLIB ^
/MANIFEST:NO ^
/OPT:NOREF ^
@@ -45,6 +43,7 @@ link ^
/ignore:4049 ^
/ignore:4210 ^
/ignore:4217 ^
/ignore:4281 ^
/DLL CodeCoverage.obj ImageManager.obj
del *.obj *.pdb *.exp *.lib
+4 -3
@@ -2,8 +2,8 @@
cls
cl ^
/c /EHa- /EHs- /GR- /GS- /Gd /Gm- /Gy /MT /O2 /Oi- /Oy- /TP /W3 /WX- /Zc:forScope /Zc:inline /Zc:wchar_t /fp:precise /nologo /wd4316 ^
/DTARGET_IA32 /DHOST_IA32 /DTARGET_WINDOWS /DBIGARRAY_MULTIPLIER=1 /DWIN32 /D__PIN__=1 /DPIN_CRT=1 /D__i386__ ^
/c /Fo /nologo /EHa- /EHs- /GR- /GS- /Gd /Gm- /Gy /MD /O2 /Oi- /Oy- /TP /W3 /WX- /Zc:forScope /Zc:inline /Zc:wchar_t /wd4316 /wd4530 /fp:precise ^
/DTARGET_IA32 /DHOST_IA32 /DTARGET_WINDOWS /DWIN32 /D__PIN__=1 /DPIN_CRT=1 /D_STLP_IMPORT_IOSTREAMS /D__i386__ ^
/I"%PIN_ROOT%\extras\xed-ia32\include\xed" ^
/I%PIN_ROOT%\source\include\pin ^
/I%PIN_ROOT%\source\include\pin\gen ^
@@ -28,7 +28,7 @@ link ^
/LIBPATH:%PIN_ROOT%\ia32\lib ^
/LIBPATH:"%PIN_ROOT%\ia32\lib-ext" ^
/LIBPATH:"%PIN_ROOT%\extras\xed-ia32\lib" ^
/LIBPATH:%PIN_ROOT%\ia32\runtime\pincrt pin.lib xed.lib pinvm.lib kernel32.lib "stlport-static.lib" "m-static.lib" "c-static.lib" "os-apis.lib" "ntdll-32.lib" crtbeginS.obj ^
/LIBPATH:%PIN_ROOT%\ia32\runtime\pincrt pin.lib xed.lib pinipc.lib pincrt.lib kernel32.lib crtbeginS.obj ^
/NODEFAULTLIB ^
/MANIFEST:NO ^
/OPT:NOREF ^
@@ -44,6 +44,7 @@ link ^
/ignore:4049 ^
/ignore:4210 ^
/ignore:4217 ^
/ignore:4281 ^
/DLL CodeCoverage.obj ImageManager.obj
del *.obj *.pdb *.exp *.lib
+9
@@ -0,0 +1,9 @@
REM - Clean up an existing or past 'test session'
taskkill /F /IM "binaryninja.exe"
timeout 1
REM del "..\..\testcase\*.id0"
REM del "..\..\testcase\*.id1"
REM del "..\..\testcase\*.id2"
REM del "..\..\testcase\*.nam"
REM del "..\..\testcase\*.til"
REM del "..\..\testcase\*.$$$"
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binja session
start "" "C:\tools\disassemblers\BinaryNinja\binaryninja.exe" "..\..\testcase\boombox.bndb"
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binja session
start "" "C:\tools\disassemblers\BinaryNinja\binaryninja.exe" "..\..\testcase\ida74\ida64.bndb"
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binja session
start "" "C:\tools\disassemblers\BinaryNinja\binaryninja.exe"
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of Binja
call close_BINJA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Binary Ninja\plugins\.#lighthouse_plugin.py"
REM - Launch a new Binja session
start "" "C:\tools\disassemblers\BinaryNinja_Personal\binaryninja.exe" "..\..\testcase\boombox.bndb"
+1
@@ -1,3 +1,4 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.1\ida.exe" "..\..\testcase\idaq71.idb"
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.4\ida64.exe" "..\..\testcase\boombox74.i64"
+19
@@ -0,0 +1,19 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.4\ida64.exe" "..\..\testcase\ida74\ida64.exe.i64"
REM start "" "C:\tools\disassemblers\IDA 7.4\ida64.exe" "C:\Users\user\Desktop\JavaScriptCore_13.4.i64"
+18
@@ -0,0 +1,18 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
REM - Launch a new IDA session
"C:\tools\disassemblers\IDA 7.0\ida64.exe" "-B" "..\..\testcase\boombox.exe"
+1
@@ -1,3 +1,4 @@
set LIGHTHOUSE_LOGGING=1
REM - Close any running instances of IDA
call close_IDA.bat
-17
@@ -1,17 +0,0 @@
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\tools\disassemblers\IDA 6.8\plugins\*lighthouse_plugin.py"
rmdir "C:\tools\disassemblers\IDA 6.8\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 6.8\plugins\"
del /F /Q "C:\tools\disassemblers\IDA 6.8\plugins\.#lighthouse_plugin.py"
REM - Relaunch two IDA sessions
start "" "C:\tools\disassemblers\IDA 6.8\idaq64.exe" "..\..\testcase\boombox.i64"
-17
@@ -1,17 +0,0 @@
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\tools\disassemblers\IDA 6.8\plugins\*lighthouse_plugin.py"
rmdir "C:\tools\disassemblers\IDA 6.8\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 6.8\plugins\"
del /F /Q "C:\tools\disassemblers\IDA 6.8\plugins\.#lighthouse_plugin.py"
REM - Relaunch two IDA sessions
start "" "C:\tools\disassemblers\IDA 6.8\idaq.exe" "..\..\testcase\idaq.idb"
-17
@@ -1,17 +0,0 @@
REM - Close any running instances of IDA
call close_IDA.bat
REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
del /F /Q "C:\tools\disassemblers\IDA 6.95\plugins\*lighthouse_plugin.py"
rmdir "C:\tools\disassemblers\IDA 6.95\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 6.95\plugins\"
del /F /Q "C:\tools\disassemblers\IDA 6.95\plugins\.#lighthouse_plugin.py"
REM - Relaunch two IDA sessions
start "" "C:\tools\disassemblers\IDA 6.95\idaq64.exe" "..\..\testcase\boombox95.i64"
+14
@@ -0,0 +1,14 @@
import os
import binaryninja
target = os.path.abspath(os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"..",
"testcase",
"boombox.exe"
))
print "Opening '%s' headlessly..." % target
x = binaryninja.BinaryViewType["PE"].open(target)
x.update_analysis_and_wait()
print "DONE!"
-711
@@ -1,711 +0,0 @@
import logging
import weakref
import itertools
import collections
from lighthouse.util import *
from lighthouse.palette import compute_color_on_gradiant
from lighthouse.painting import *
from lighthouse.metadata import DatabaseMetadata
logger = logging.getLogger("Lighthouse.Coverage")
#------------------------------------------------------------------------------
# Coverage / Data Mapping
#------------------------------------------------------------------------------
#
# Raw runtime data (eg, coverage or trace) passed into the director is
# stored internally in DatabaseCoverage objects. A DatabaseCoverage
# object can be roughly equated to a single loaded runtime data file.
#
# DatabaseCoverage objects simply map their raw runtime data to the
# database using the lifted metadata described in metadata.py. The
# 'mapping' objects detailed in this file are effectively produced as
# a thin layer on top of cached metadata.
#
# As mapping objects retain the raw runtime data internally, we are
# able to rebuild mappings should the database/metadata get updated or
# refreshed by the user.
#
#------------------------------------------------------------------------------
# Database Coverage / Data Mapping
#------------------------------------------------------------------------------
class DatabaseCoverage(object):
"""
Database level coverage mapping.
"""
def __init__(self, data, palette):
# color palette
self.palette = palette
#
# the abstract above gives some background to the design employed by
# Lighthouse to map coverage data to the database.
#
# coverage objects such as this (DatabaseCoverage) are basically
# glorified mappings of coverage / runtime data on top of their
# metadata counterparts. A coverage object by itself is mostly useless
# without its corresponding metadata object.
#
# here we simply populate self._metadata with a stub metadata object,
# but at runtime we will inject a fully collected DatabaseMetadata
# object as maintained by the director.
#
self._metadata = DatabaseMetadata()
#
# the hitmap effectively holds the raw coverage data. the name
# should speak for itself, but a hitmap will track the number of
# times a given address / instruction was executed.
#
# Eg:
# hitmap =
# {
# 0x8040100: 1,
# 0x8040102: 1,
# 0x8040105: 3,
# 0x8040108: 3, # 0x8040108 was executed 3 times...
# 0x804010a: 3,
# 0x804010f: 1,
# ...
# }
#
# this structure gives us an interesting degree of flexibility with
# regard to what data sources we can consume (inst trace, coverage, etc)
# and ways we can leverage said data (visualize coverage, heatmaps)
#
self._hitmap = build_hitmap(data)
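The `build_hitmap` helper used above is imported from `lighthouse.util`; as a rough illustration, a minimal sketch that assumes the input is an iterable of executed addresses (the real helper may also accept pre-counted data):

```python
import collections

def build_hitmap(data):
    # count how many times each address was executed
    # (sketch only; not the actual lighthouse.util implementation)
    hitmap = collections.defaultdict(int)
    for address in data:
        hitmap[address] += 1
    return hitmap

# eg, a tiny trace where 0x8040105 was executed twice
trace = [0x8040100, 0x8040105, 0x8040105]
hitmap = build_hitmap(trace)
```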
#
# the coverage hash is a simple hash of the coverage bitmap/mask.
# it is primarily used by the director as a means of quickly comparing
# coverage, and predicting outputs of logical / arithmetic operations.
#
# the hash will need to be updated via _update_coverage_hash() anytime
# the hitmap is modified or changed internally. we cache a concrete
# result of the coverage hash because computing the hash on demand can
# be expensive in terms of time.
#
# see the usage of 'coverage_hash' in director.py for more info
#
self.coverage_hash = 0
self._update_coverage_hash()
#
# Lighthouse will only compute coverage for code within defined
# functions. therefore, all coverage / runtime data will get bucketed
# into its appropriate NodeCoverage object (eg, a basic block) or it
# will be considered 'unmapped'
#
# starting out, all coverage data is marked as unmapped
#
self._unmapped_data = set(self._hitmap.keys())
self._unmapped_data.add(idaapi.BADADDR)
#
# self._map_coverage is responsible for mapping coverage data to the
# database (via the lifted 'DatabaseMetadata' cache). The mapping
# process will yield NodeCoverage & FunctionCoverage objects.
#
# NodeCoverage objects represent coverage at the node (basic block)
# level and are owned by their respective FunctionCoverage objects.
#
# FunctionCoverage represent coverage at the function level by
# leveraging their respective NodeCoverage children.
#
self.nodes = {}
self.functions = {}
self.instruction_percent = 0.0
#
# we instantiate a single weakref of ourself (the DatabaseCoverage
# object) such that we can distribute it to the children we create
# without having to repeatedly instantiate new ones.
#
self._weak_self = weakref.proxy(self)
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def data(self):
"""
The data (a hitmap) used by this mapping.
"""
return self._hitmap
@property
def coverage(self):
"""
The instruction-level coverage bitmap/mask of this mapping.
"""
return self._hitmap.viewkeys()
#--------------------------------------------------------------------------
# Metadata Population
#--------------------------------------------------------------------------
def update_metadata(self, metadata, delta=None):
"""
Install a new database metadata object.
"""
# install the new metadata
self._metadata = weakref.proxy(metadata)
self.unmap_all()
def refresh(self):
"""
Refresh the mapping of our coverage data to the database metadata.
"""
# rebuild our coverage mapping
dirty_nodes, dirty_functions = self._map_coverage()
# bake our coverage map
self._finalize(dirty_nodes, dirty_functions)
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# dump the unmappable coverage data
#self.dump_unmapped()
def refresh_nodes(self):
"""
Special fast-refresh of nodes as used in the un-painting process.
"""
dirty_nodes = self._map_nodes()
self._finalize_nodes(dirty_nodes)
def _finalize(self, dirty_nodes, dirty_functions):
"""
Finalize coverage objects for use.
"""
self._finalize_nodes(dirty_nodes)
self._finalize_functions(dirty_functions)
self._finalize_instruction_percent()
def _finalize_nodes(self, dirty_nodes):
"""
Finalize coverage nodes for use.
"""
for node_coverage in dirty_nodes.itervalues():
node_coverage.finalize()
def _finalize_functions(self, dirty_functions):
"""
Finalize coverage functions for use.
"""
for function_coverage in dirty_functions.itervalues():
function_coverage.finalize()
def _finalize_instruction_percent(self):
"""
Finalize the database coverage % by instructions executed in all defined functions.
"""
# sum all the instructions in the database metadata
total = sum(f.instruction_count for f in self._metadata.functions.itervalues())
if not total:
self.instruction_percent = 0.0
return
# sum all the instructions executed by the coverage
executed = sum(f.instructions_executed for f in self.functions.itervalues())
# return the average function coverage % aka 'the database coverage %'
self.instruction_percent = float(executed) / total
#--------------------------------------------------------------------------
# Data Operations
#--------------------------------------------------------------------------
def add_data(self, data, update=True):
"""
Add runtime data to this mapping.
"""
# add the given runtime data to our data source
for address, hit_count in data.iteritems():
self._hitmap[address] += hit_count
# do not update other internal structures if requested
if not update:
return
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# mark these touched addresses as dirty
self._unmapped_data |= data.viewkeys()
def add_addresses(self, addresses, update=True):
"""
Add a list of instruction addresses to this mapping (eg, a trace).
"""
# increment the hit count for an address
for address in addresses:
self._hitmap[address] += 1
# do not update other internal structures if requested
if not update:
return
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# mark these touched addresses as dirty
self._unmapped_data |= set(addresses)
def subtract_data(self, data):
"""
Subtract runtime data from this mapping.
"""
# subtract the given runtime data from our data source
for address, hit_count in data.iteritems():
self._hitmap[address] -= hit_count
#assert self._hitmap[address] >= 0
#
# if there are no longer any hits for this address, delete its
# entry from the source_data dictionary. we don't want its entry to
# hang around because we use self._hitmap.viewkeys() as a
# coverage bitmap/mask
#
if not self._hitmap[address]:
del self._hitmap[address]
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
#
# unmap everything because a complete re-mapping is easier with the
# current implementation of things
#
self.unmap_all()
def mask_data(self, coverage_mask):
"""
Mask the hitmap data against a given coverage mask.
Returns a new DatabaseCoverage containing the masked hitmap.
"""
composite_data = collections.defaultdict(int)
# preserve only hitmap data that matches the coverage mask
for address in coverage_mask:
composite_data[address] = self._hitmap[address]
# done, return a new DatabaseCoverage masked with the given coverage
return DatabaseCoverage(composite_data, self.palette)
def _update_coverage_hash(self):
"""
Update the hash of the coverage mask.
"""
if self._hitmap:
self.coverage_hash = hash(frozenset(self._hitmap.viewkeys()))
else:
self.coverage_hash = 0
#--------------------------------------------------------------------------
# Coverage Mapping
#--------------------------------------------------------------------------
def _map_coverage(self):
"""
Map loaded coverage data to the given database metadata.
"""
# re-map any unmapped coverage to nodes
dirty_nodes = self._map_nodes()
# re-map nodes to functions
dirty_functions = self._map_functions(dirty_nodes)
# return the modified objects
return (dirty_nodes, dirty_functions)
def _map_nodes(self):
"""
Map loaded runtime data to database defined nodes (basic blocks).
"""
dirty_nodes = {}
addresses_to_map = collections.deque(sorted(self._unmapped_data))
#
# This while loop is the core of our coverage mapping process.
#
# The '_unmapped_data' list is consumed by this loop, mapping
# any unmapped runtime data maintained by this DatabaseCoverage
# to the given database metadata.
#
# It should be noted that the rest of the database coverage
# mapping (eg functions) gets built on top of the mappings we build
# for nodes here using the more or less raw/recycled runtime data.
#
while addresses_to_map:
# get the next address to map
address = addresses_to_map.popleft()
# get the node (basic block) that contains this address
node_metadata = self._metadata.get_node(address)
#
# failed to locate the node (basic block) for this address.
# this address must not fall inside of a defined function...
#
if not node_metadata:
continue
#
# we found applicable node metadata for this address, now try
# to find the mapping object for this node address
#
if node_metadata.address in self.nodes:
node_coverage = self.nodes[node_metadata.address]
#
# failed to locate a node coverage object, looks like this is
# the first time we have identified coverage for this node.
# create a coverage node object and use it now.
#
else:
node_coverage = NodeCoverage(node_metadata.address, self._weak_self)
self.nodes[node_metadata.address] = node_coverage
# compute the basic block end now to reduce overhead in the loop below
node_end = node_metadata.address + node_metadata.size
#
# the loop below can be thought of almost as an inlined fast-path
# where we expect the next several addresses to belong to the same
# node (basic block).
#
# with the assumption of linear program execution, we can reduce
# the heavier overhead of all the lookup code above by simply
# checking if the next address in the queue (addresses_to_map)
# falls into the same / current node (basic block).
#
# we can simply re-use the current node and its coverage object
# until the next address to be processed falls outside our scope
#
while 1:
#
# map the hitmap data for the current address (an instruction)
# to this node mapping and mark the instruction as mapped by
# discarding its address from the unmapped data list
#
if address in node_metadata.instructions:
node_coverage.executed_instructions[address] = self._hitmap[address]
self._unmapped_data.discard(address)
# get the next address to attempt mapping on
address = addresses_to_map.popleft()
#
# if the next address is not in this node, it's time to break out
# of this loop and send it through the full node lookup path
#
if not (node_metadata.address <= address < node_end):
addresses_to_map.appendleft(address)
break
#
# the next address to be mapped DOES fall within our current
# node, loop back around in the fast-path and map it
#
# ...
# since we updated this node, ensure we're tracking it as dirty
dirty_nodes[node_metadata.address] = node_coverage
# done
return dirty_nodes
def _map_functions(self, dirty_nodes):
"""
Map loaded coverage data to database defined functions.
"""
dirty_functions = {}
#
# thanks to the _map_nodes function, we now have a repository of
# node coverage objects that are considered 'dirty' and can be used
# to precisely guide the generation of our function level coverage
#
for node_coverage in dirty_nodes.itervalues():
#
# using the node_coverage object, we retrieve its underlying
# metadata so that we can perform a reverse lookup of the function
#
function_metadata = self._metadata.nodes[node_coverage.address].function
#
# now we can add this node to its respective function level
# coverage mapping
#
function_coverage = self.functions.get(function_metadata.address, None)
#
# if we failed to locate a function coverage object, it means
# that this is the first time we have identified coverage for this
# function. create a new coverage function object and use it now.
#
if not function_coverage:
function_coverage = FunctionCoverage(function_metadata.address, self._weak_self)
self.functions[function_metadata.address] = function_coverage
# mark this node as executed in the function level mapping
function_coverage.mark_node(node_coverage)
dirty_functions[function_metadata.address] = function_coverage
# end of nodes loop
# done
return dirty_functions
def unmap_all(self):
"""
Unmap all mapped data.
"""
self._unmapped_data = set(self._hitmap.keys())
self._unmapped_data.add(idaapi.BADADDR)
self.nodes = {}
self.functions = {}
def _unmap_delta(self, delta):
"""
Unmap node & function coverage affected by the metadata delta.
The metadata delta tells us exactly which parts of the database
changed since our last coverage mapping. This function surgically
unmaps the pieces of our coverage that may now be stale.
This enables us to recompute only what is necessary upon refresh.
"""
self._unmap_nodes(itertools.chain(delta.nodes_removed, delta.nodes_modified))
self._unmap_functions(delta.functions_removed)
def _unmap_nodes(self, node_addresses):
"""
Unmap any data associated with a given list of node addresses.
"""
#
# using the metadata delta as a guide, we loop through all the nodes it
# has noted as either modified or deleted. it is in our best interest
# to unmap any of these dirty (stale) node addresses in OUR coverage
# mapping so we can selectively regenerate their coverage later.
#
for node_address in node_addresses:
#
# if there's no coverage for this node, then we have nothing to do.
# continue on to the next dirty node address
#
node_coverage = self.nodes.pop(node_address, None)
if not node_coverage:
continue
# the node was found, unmap any of its tracked coverage blocks
self._unmapped_data.update(
node_coverage.executed_instructions.viewkeys()
)
def _unmap_functions(self, function_addresses):
"""
Unmap any data associated with a given list of function addresses.
"""
for function_address in function_addresses:
self.functions.pop(function_address, None)
#--------------------------------------------------------------------------
# Debug
#--------------------------------------------------------------------------
def dump_unmapped(self):
"""
Dump the unmapped coverage data.
"""
lmsg("Unmapped Coverage:")
for address in self._unmapped_data:
lmsg(" * 0x%X" % address)
#------------------------------------------------------------------------------
# Function Level Coverage
#------------------------------------------------------------------------------
class FunctionCoverage(object):
"""
Function level coverage mapping.
"""
def __init__(self, function_address, database=None):
self._database = database
self.address = function_address
# addresses of nodes executed
self.nodes = {}
# the percentage of instructions / nodes executed by this function's coverage
self.instruction_percent = 0.0
self.node_percent = 0.0
# baked colors
if function_address == idaapi.BADADDR:
self.coverage_color = QtGui.QColor(30, 30, 30)
else:
self.coverage_color = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def hits(self):
"""
The cumulative instruction executions in this function.
"""
return sum(x.hits for x in self.nodes.itervalues())
@property
def nodes_executed(self):
"""
The number of nodes executed in this function.
"""
return len(self.nodes)
@property
def instructions_executed(self):
"""
The number of unique instructions executed in this function.
"""
return sum(x.instructions_executed for x in self.nodes.itervalues())
@property
def instructions(self):
"""
The instruction addresses in this function.
"""
return set([ea for node in self.nodes.itervalues() for ea in node.executed_instructions.keys()])
#--------------------------------------------------------------------------
# Controls
#--------------------------------------------------------------------------
def mark_node(self, node_coverage):
"""
Mark the given node address as executed.
"""
self.nodes[node_coverage.address] = node_coverage
def finalize(self):
"""
Finalize coverage data for use.
"""
function_metadata = self._database._metadata.functions[self.address]
# compute the % of nodes executed
self.node_percent = float(self.nodes_executed) / function_metadata.node_count
# compute the % of instructions executed
self.instruction_percent = \
float(self.instructions_executed) / function_metadata.instruction_count
# the sum of node executions in this function
node_sum = sum(x.executions for x in self.nodes.itervalues())
# the estimated number of executions this function has experienced
self.executions = float(node_sum) / function_metadata.node_count
# bake colors
self.coverage_color = compute_color_on_gradiant(
self.instruction_percent,
self._database.palette.coverage_bad,
self._database.palette.coverage_good
)
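finalize() above boils down to three ratios. A detached sketch of the math (the field names mirror the metadata object, but the numbers are invented):

```python
def finalize_stats(nodes_executed, node_count,
                   instructions_executed, instruction_count,
                   node_executions_sum):
    """Compute the coverage percentages and the estimated execution count."""
    return {
        # fraction of the function's basic blocks that saw execution
        "node_percent": nodes_executed / float(node_count),
        # fraction of the function's instructions that saw execution
        "instruction_percent": instructions_executed / float(instruction_count),
        # rough estimate of how many times the function itself ran
        "executions": node_executions_sum / float(node_count),
    }

# a function with 4 nodes / 20 instructions, where 2 nodes (10 instructions)
# executed, and the per-node execution estimates sum to 6.0
stats = finalize_stats(2, 4, 10, 20, 6.0)
```

The instruction percentage is what drives the gradient color baked at the end of finalize().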
#------------------------------------------------------------------------------
# Node Coverage / Data Mapping
#------------------------------------------------------------------------------
class NodeCoverage(object):
"""
Node (basic block) level coverage mapping.
"""
def __init__(self, node_address, database=None):
self._database = database
self.address = node_address
self.executed_instructions = {}
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def hits(self):
"""
The cumulative instruction executions in this node.
"""
return sum(self.executed_instructions.itervalues())
@property
def instructions_executed(self):
"""
The number of unique instructions executed in this node.
"""
return len(self.executed_instructions)
#--------------------------------------------------------------------------
# Controls
#--------------------------------------------------------------------------
def finalize(self):
"""
Finalize the coverage metrics for faster access.
"""
palette = self._database.palette
node_metadata = self._database._metadata.nodes[self.address]
# the estimated number of executions this node has experienced.
self.executions = float(self.hits) / node_metadata.instruction_count
# bake colors
self.coverage_color = palette.ida_coverage
@@ -1,689 +0,0 @@
import time
import bisect
import logging
import threading
import idc
import idaapi
from lighthouse.util import *
from lighthouse.util.ida import *
logger = logging.getLogger("Lighthouse.Painting")
class CoveragePainter(object):
"""
Asynchronous database painter.
"""
def __init__(self, director, palette):
# color palette
self.palette = palette
self._director = director
#----------------------------------------------------------------------
# Painted State
#----------------------------------------------------------------------
#
# the coverage painter maintains its own internal record of what
# instruction addresses and graph nodes it has painted.
#
self._painted_nodes = set()
self._painted_instructions = set()
#----------------------------------------------------------------------
# HexRays Hooking
#----------------------------------------------------------------------
#
# we attempt to hook hexrays the *first* time a repaint request is
# made. the assumption being that IDA is fully loaded and if hexrays is
# present, it will definitely be available (for hooking) by this time
#
self._attempted_hook = False
#----------------------------------------------------------------------
# Async
#----------------------------------------------------------------------
#
# to communicate with the asynchronous painting thread, we send a
# message via the thread event to signal a new paint request, and
# use the repaint_requested bool to interrupt a running paint request.
#
self._action_complete = threading.Event()
self._repaint_request = threading.Event()
self._repaint_requested = False
self._end_threads = False
#
# asynchronous database painting thread
#
self._painting_worker = threading.Thread(
target=self._async_database_painter,
name="DatabasePainter"
)
self._painting_worker.start()
#----------------------------------------------------------------------
# Callbacks
#----------------------------------------------------------------------
# register for cues from the director
self._director.coverage_switched(self.repaint)
self._director.coverage_modified(self.repaint)
def terminate(self):
"""
Cleanup & terminate the painter.
"""
self._end_threads = True
self._repaint_requested = True
self._repaint_request.set()
self._painting_worker.join()
#--------------------------------------------------------------------------
# Initialization
#--------------------------------------------------------------------------
def _init_hexrays_hooks(self):
"""
Install Hex-Rays hooks (when available).
"""
result = False
if idaapi.init_hexrays_plugin():
logger.debug("HexRays present, installing hooks...")
result = idaapi.install_hexrays_callback(self._hxe_callback)
logger.debug("HexRays hooked: %r" % result)
return result
#------------------------------------------------------------------------------
# Painting
#------------------------------------------------------------------------------
def repaint(self):
"""
Paint coverage defined by the current database mappings.
"""
# attempt to hook hexrays *once*
if not self._attempted_hook:
self._init_hexrays_hooks()
self._attempted_hook = True
# signal the painting thread that it's time to repaint coverage
self._repaint_requested = True
self._repaint_request.set()
#------------------------------------------------------------------------------
# Painting - Instructions / Items (Lines)
#------------------------------------------------------------------------------
def paint_instructions(self, instructions):
"""
Paint instruction level coverage defined by the current database mapping.
"""
for address in instructions:
idaapi.set_item_color(address, self.palette.ida_coverage)
self._painted_instructions.add(address)
def clear_instructions(self, instructions):
"""
Clear paint from the given instructions.
"""
for address in instructions:
idaapi.set_item_color(address, idc.DEFCOLOR)
self._painted_instructions.discard(address)
@idawrite_async
def _paint_instructions(self, instructions):
"""
Internal routine to force the called action onto the IDA main thread.
"""
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
self.paint_instructions(instructions)
self._action_complete.set()
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
@idawrite_async
def _clear_instructions(self, instructions):
"""
Internal routine to force the called action onto the IDA main thread.
"""
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
self.clear_instructions(instructions)
self._action_complete.set()
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
#------------------------------------------------------------------------------
# Painting - Nodes (Basic Blocks)
#------------------------------------------------------------------------------
def paint_nodes(self, nodes_coverage):
"""
Paint node level coverage defined by the current database mappings.
"""
# create a node info object as our vehicle for setting the node color
node_info = idaapi.node_info_t()
# NOTE/COMPAT:
if using_ida7api:
set_node_info = idaapi.set_node_info
else:
set_node_info = idaapi.set_node_info2
#
# loop through every node that we have coverage data for, painting them
# in the IDA graph view as applicable.
#
for node_coverage in nodes_coverage:
node_metadata = node_coverage._database._metadata.nodes[node_coverage.address]
# assign the background color we would like to paint to this node
node_info.bg_color = node_coverage.coverage_color
# do the *actual* painting of a single node instance
set_node_info(
node_metadata.function.address,
node_metadata.id,
node_info,
idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
)
self._painted_nodes.add(node_metadata.address)
def clear_nodes(self, nodes_metadata):
"""
Clear paint from the given graph nodes.
"""
# create a node info object as our vehicle for resetting the node color
node_info = idaapi.node_info_t()
node_info.bg_color = idc.DEFCOLOR
# NOTE/COMPAT:
if using_ida7api:
set_node_info = idaapi.set_node_info
else:
set_node_info = idaapi.set_node_info2
#
# loop through every node that we have metadata for, clearing
# their paint (color) in the IDA graph view as applicable.
#
for node_metadata in nodes_metadata:
# do the *actual* painting of a single node instance
set_node_info(
node_metadata.function.address,
node_metadata.id,
node_info,
idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
)
self._painted_nodes.discard(node_metadata.address)
@idawrite_async
def _paint_nodes(self, nodes_coverage):
"""
Internal routine to force the called action onto the IDA main thread.
"""
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
self.paint_nodes(nodes_coverage)
self._action_complete.set()
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
@idawrite_async
def _clear_nodes(self, nodes_metadata):
"""
Internal routine to force the called action onto the IDA main thread.
"""
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
self.clear_nodes(nodes_metadata)
self._action_complete.set()
time.sleep(0) # HACK: workaround for the idapython idaapi.MFF_NOWAIT bug
#------------------------------------------------------------------------------
# Painting - Functions
#------------------------------------------------------------------------------
def paint_function(self, address):
"""
Paint function instructions & nodes with the current database mappings.
"""
# collect function information
function_metadata = self._director.metadata.functions[address]
function_coverage = self._director.coverage.functions.get(address, None)
# function coverage exists, so let's do a cleaner paint
if function_coverage:
#
# ~ instructions ~
#
# compute the painted instructions within this function
painted = self._painted_instructions & function_metadata.instructions
# compute the painted instructions that will not get painted over
stale_instructions = painted - function_coverage.instructions
#
# ~ nodes ~
#
# compute the painted nodes within this function
painted = self._painted_nodes & function_metadata.nodes.viewkeys()
# compute the painted nodes that will not get painted over
stale_nodes_ea = painted - function_coverage.nodes.viewkeys()
stale_nodes = [function_metadata.nodes[ea] for ea in stale_nodes_ea]
#
# ~ painting ~
#
# clear instructions
if not self._async_action(self._clear_instructions, stale_instructions):
return False
# clear nodes
if not self._async_action(self._clear_nodes, stale_nodes):
return False
# paint instructions
if not self._async_action(self._paint_instructions, function_coverage.instructions):
return False
# paint nodes
if not self._async_action(self._paint_nodes, function_coverage.nodes.itervalues()):
return False
# no coverage, just clear the function's instruction & nodes
else:
# clear instructions
if not self._async_action(self._clear_instructions, function_metadata.instructions):
return False
# clear nodes
if not self._async_action(self._clear_nodes, function_metadata.nodes.itervalues()):
return False
# not interrupted
return True
#------------------------------------------------------------------------------
# Painting - HexRays (Decompilation / Source)
#------------------------------------------------------------------------------
def paint_hexrays(self, cfunc, database_coverage):
"""
Paint decompilation text for the given HexRays Window.
"""
logger.debug("Painting HexRays for 0x%X" % cfunc.entry_ea)
# more code-friendly, readable aliases
database_metadata = database_coverage._metadata
decompilation_text = cfunc.get_pseudocode()
#
# the objective here is to paint hexrays lines that are associated with
# our runtime data. unfortunately, there are very few API resources that
# link decompilation line numbers to anything (eg, citems, nodes, ea, etc)
#
# this means that we must build our own data relationships to draw from
#
#
# first, let's build a relationship between a given line of text, and the
# citems that contribute to it. the only way to do that (as I see it) is
# to lex citem ID's out of the decompiled output string
#
line2citem = map_line2citem(decompilation_text)
#
# now that we have some understanding of how citems contribute to each
# line of decompiled text, we can use this information to build a
# relationship that ties graph nodes (basic blocks) to individual lines.
#
line2node = map_line2node(cfunc, database_metadata, line2citem)
# great, now we have all the information we need to paint
#
# paint hexrays output
#
lines_painted = 0
# extract the node addresses that have been hit by our function's mapping data
executed_nodes = set(database_coverage.functions[cfunc.entry_ea].nodes.iterkeys())
#
# now we loop through every line_number of the decompiled text that claims
# to have a relationship with a graph node (basic block) and check to see
# if it contains a node our coverage has marked as executed
#
for line_number, line_nodes in line2node.iteritems():
#
# if there is any intersection of nodes on this line and the coverage
# data's set of executed nodes, we are inclined to color it
#
if line_nodes & executed_nodes:
decompilation_text[line_number].bgcolor = self.palette.ida_coverage
lines_painted += 1
#
# done painting from our mapping data
#
# if there was nothing painted yet, there's no point in continuing...
if not lines_painted:
return
#
# if we made it this far, we must have painted *some* lines inside the
# function. that means we should paint the function declaration, and
# header (variable declaration) lines as their execution will be implied
#
for line_number in xrange(0, cfunc.hdrlines):
decompilation_text[line_number].bgcolor = self.palette.ida_coverage
lines_painted += 1
# finally, refresh the view
idaapi.refresh_idaview_anyway()
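The core of the Hex-Rays painting logic above is a plain set intersection between the nodes contributing to each decompiled line and the set of executed nodes. A standalone sketch (line numbers and node addresses are made up):

```python
def lines_to_paint(line2node, executed_nodes):
    """Return the decompiled line numbers whose nodes saw execution."""
    return sorted(
        line_number
        for line_number, line_nodes in line2node.items()
        if line_nodes & executed_nodes  # any overlap means the line ran
    )

# line 4 maps to two basic blocks, one of which executed; line 7's block ran
line2node = {3: {0x401000}, 4: {0x401010, 0x401020}, 7: {0x401030}}
executed = {0x401010, 0x401030}
```

In the plugin, line2node comes from map_line2node() and executed comes from the function's coverage mapping; the intersection test is identical.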
def _hxe_callback(self, event, *args):
"""
HexRays event handler.
"""
# decompilation text generation is complete and it is about to be shown
if event == idaapi.hxe_text_ready:
# more code-friendly, readable aliases
vdui = args[0]
cfunc = vdui.cfunc
# if there's no coverage data for this function, there's nothing to do
if not cfunc.entry_ea in self._director.coverage.functions:
return 0
# paint the decompilation text for this function
self.paint_hexrays(cfunc, self._director.coverage)
return 0
#------------------------------------------------------------------------------
# Priority Painting
#------------------------------------------------------------------------------
def _priority_paint(self):
"""
Immediately repaint regions of the database visible to the user.
"""
cursor_address = idaapi.get_screen_ea() # TODO: threadsafe?
# paint functions around the cursor address
painted = self._priority_paint_functions(cursor_address)
# the operation has been interrupted by a repaint request
if self._repaint_requested:
return False
# paint instructions around the cursor address
self._priority_paint_instructions(cursor_address, ignore=painted)
# the operation has been interrupted by a repaint request
if self._repaint_requested:
return False
# succesful completion
return True
def _priority_paint_functions(self, target_address):
"""
Paint functions in the immediate vicinity of the given address.
This will paint both the instructions & graph nodes of defined functions.
"""
database_metadata = self._director.metadata
database_coverage = self._director.coverage
function_instructions = set()
# the number of functions before and after the cursor to paint
FUNCTION_BUFFER = 1
# get the function metadata for the function closest to our cursor
function_metadata = database_metadata.get_closest_function(target_address)
if not function_metadata:
return function_instructions # this will be empty
# select the range of functions around us that we would like to paint
func_num = database_metadata.get_function_num(function_metadata.address)
func_num_start = max(func_num - FUNCTION_BUFFER, 0)
func_num_end = func_num + FUNCTION_BUFFER + 1
# repaint the specified range of functions
for current_num in xrange(func_num_start, func_num_end):
# get the next function to paint
try:
function_metadata = database_metadata.get_function_by_num(current_num)
except IndexError:
continue
# repaint the function
if not self.paint_function(function_metadata.address):
break # paint interrupted
# get the function coverage data for the target address
function_coverage = database_coverage.functions.get(function_metadata.address, None)
if not function_coverage:
continue
# accumulate the painted instructions by this pass
function_instructions |= function_coverage.instructions
# the operation has been interrupted by a repaint request
if self._repaint_requested:
break
# return the addresses of all the instructions we painted over
return function_instructions
def _priority_paint_instructions(self, target_address, ignore=set()):
"""
Paint instructions in the immediate vicinity of the given address.
Optionally, one can provide a set of addresses to ignore while painting.
"""
database_metadata = self._director.metadata
database_coverage = self._director.coverage
# the number of instruction bytes before and after the cursor to paint
INSTRUCTION_BUFFER = 200
# determine range of instructions to repaint
start_address = max(target_address - INSTRUCTION_BUFFER, 0)
end_address = target_address + INSTRUCTION_BUFFER
instructions = set(database_metadata.get_instructions_slice(start_address, end_address))
# remove any instructions painted by the function paints
instructions -= ignore
# mask only the instructions with coverage data in this region
instructions_coverage = instructions & database_coverage.coverage
#
# clear all instructions in this region, repaint the coverage data
#
# clear instructions
if not self._async_action(self._clear_instructions, instructions):
return set()
# paint instructions
if not self._async_action(self._paint_instructions, instructions_coverage):
return set()
# return the instruction addresses painted
return instructions_coverage
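get_instructions_slice() presumably walks a sorted list of instruction addresses (note the bisect import at the top of this file); the windowing idea can be sketched with the standard bisect module. The helper name and addresses below are assumptions for illustration:

```python
import bisect

def instructions_slice(sorted_addresses, start, end):
    """Return the instruction addresses in [start, end) via binary search."""
    lo = bisect.bisect_left(sorted_addresses, start)
    hi = bisect.bisect_left(sorted_addresses, end)
    return sorted_addresses[lo:hi]

# a tiny sorted address list standing in for the database metadata
addresses = [0x1000, 0x1004, 0x1008, 0x1010, 0x1400]
```

Two binary searches make the slice O(log n) to locate regardless of database size, which matters when this runs on every priority repaint.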
#------------------------------------------------------------------------------
# Asynchronous Painting
#------------------------------------------------------------------------------
def _async_database_painter(self):
"""
Asynchronous database painting worker loop.
"""
logger.debug("Starting DatabasePainter thread...")
#
# Asynchronous Database Painting Loop
#
while True:
# wait for the next external repaint request
self._repaint_request.wait()
# if we've been signaled to spindown the painting thread, exit now
if self._end_threads:
break
# clear the repaint flag
self._repaint_request.clear()
self._repaint_requested = False
# more code-friendly, readable aliases
database_coverage = self._director.coverage
database_metadata = self._director.metadata
start = time.time()
#------------------------------------------------------------------
#
# immediately paint the regions of the database the user is looking at
#
if not self._priority_paint():
continue # a repaint was requested
#
# perform a more comprehensive paint
#
# compute the painted instructions that will not get painted over
stale_instructions = self._painted_instructions - database_coverage.coverage
# compute the painted nodes that will not get painted over
stale_nodes_ea = self._painted_nodes - database_coverage.nodes.viewkeys()
stale_nodes = [database_metadata.nodes[ea] for ea in stale_nodes_ea]
# clear instructions
if not self._async_action(self._clear_instructions, stale_instructions):
continue # a repaint was requested
# clear nodes
if not self._async_action(self._clear_nodes, stale_nodes):
continue # a repaint was requested
# paint instructions
if not self._async_action(self._paint_instructions, database_coverage.coverage):
continue # a repaint was requested
# paint nodes
if not self._async_action(self._paint_nodes, database_coverage.nodes.itervalues()):
continue # a repaint was requested
#------------------------------------------------------------------
end = time.time()
logger.debug("Full Paint took %s seconds" % (end - start))
# thread exit
logger.debug("Exiting DatabasePainter thread...")
def _async_action(self, paint_action, work_iterable):
"""
Split a normal paint routine into interruptible chunks.
Internal routine for asynchronous painting.
"""
CHUNK_SIZE = 800 # somewhat arbitrary
# split the given nodes into multiple paints
for work_chunk in chunks(list(work_iterable), CHUNK_SIZE):
#
# reset the paint event signal so that it is ready for the next
# paint request. it will let us know when the asynchronous paint
# action has completed in the IDA main thread
#
self._action_complete.clear()
#
# paint or unpaint a chunk of 'work' (nodes, or instructions) with
# the given paint function (eg, paint_nodes, clear_instructions)
#
job_id = paint_action(work_chunk)
assert job_id != -1
#
# wait for the asynchronous paint event to complete or a signal that
# we should end this thread (via end_threads)
#
while not (self._action_complete.wait(timeout=0.1) or self._end_threads):
continue
#
# our end_threads signal/bool can only originate from the main IDA
# thread (plugin termination). we make the assumption that no more
# MFF_WRITE requests (eg, 'paint_action') will get processed.
#
# we do a best effort to cancel the in-flight job (just in case)
# and return so we can exit the thread.
#
if self._end_threads:
idaapi.cancel_exec_request(job_id)
return False
#
# the operation has been interrupted by a repaint request, bail
# immediately so that we can process the next repaint
#
if self._repaint_requested:
return False
#
# sleep some so we don't choke the main IDA thread
#
time.sleep(.001)
# operation completed successfully
return True
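The chunks() helper used above comes from lighthouse.util; it is a standard grouping idiom. A self-contained equivalent of the interruptible chunked loop, minus the IDA thread marshaling and event signaling (function names here are illustrative):

```python
def chunks(items, size):
    """Yield successive size-sized slices from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def chunked_action(action, work, should_stop, chunk_size=800):
    """Apply action to chunks of work; bail early if should_stop() fires."""
    for chunk in chunks(list(work), chunk_size):
        if should_stop():
            return False  # interrupted, caller will re-run with fresh data
        action(chunk)
    return True

# paint 10 'instructions' in chunks of 4, with no interruption requested
painted = []
done = chunked_action(painted.extend, range(10), lambda: False, chunk_size=4)
```

Checking the stop flag between chunks is what lets a fresh repaint request preempt a long-running paint without blocking the UI for the whole pass.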
@@ -1,306 +0,0 @@
from lighthouse.util import *
#------------------------------------------------------------------------------
# IDA Plugin Palette
#------------------------------------------------------------------------------
class LighthousePalette(object):
"""
Color Palette for the Lighthouse plugin.
TODO: external theme customization, controls
"""
def __init__(self):
"""
Initialize default palette colors for Lighthouse.
"""
# one-time initialization flag, used for selecting initial palette
self._initialized = False
# the active theme name
self._qt_theme = "Light"
self._ida_theme = "Light"
# the list of available themes
self._themes = \
{
"Dark": 0,
"Light": 1,
}
#
# Coverage Overview
#
self._selection = [QtGui.QColor(100, 0, 130), QtGui.QColor(226, 143, 0)]
self._coverage_bad = [QtGui.QColor(221, 0, 0), QtGui.QColor(207, 31, 0)]
self._coverage_good = [QtGui.QColor(51, 153, 255), QtGui.QColor(75, 209, 42)]
#
# IDA Views / HexRays
#
self._ida_coverage = [0x990000, 0xFFE2A8] # NOTE: IDA uses BBGGRR
#
# Composing Shell
#
self._overview_bg = [QtGui.QColor(20, 20, 20), QtGui.QColor(20, 20, 20)]
self._composer_fg = [QtGui.QColor(255, 255, 255), QtGui.QColor(255, 255, 255)]
self._valid_text = [0x80F0FF, 0x0000FF]
self._invalid_text = [0xF02070, 0xFF0000]
self._invalid_highlight = [0x990000, 0xFF0000]
#
# Composition Grammar
#
self._logic_token = [0xF02070, 0xFF0000]
self._comma_token = [0x00FF00, 0x0000FF]
self._paren_token = [0x40FF40, 0x0000FF]
self._coverage_token = [0x80F0FF, 0x000000]
#--------------------------------------------------------------------------
# Theme Management
#--------------------------------------------------------------------------
@property
def ida_theme(self):
"""
Return the active IDA theme number.
"""
return self._themes[self._ida_theme]
@property
def qt_theme(self):
"""
Return the active Qt theme number.
"""
return self._themes[self._qt_theme]
def refresh_colors(self):
"""
Dynamically compute palette color based on IDA theme.
Depending on whether IDA is using a dark or light theme, we *try*
to select colors that will hopefully keep things most readable.
"""
# TODO: temporary until I have a better mechanism to do one-time init
if self._initialized:
return
#
# NOTE/TODO:
#
# the dark table (Qt) theme is way better than the light theme
# right now, so we're just going to force that on for everyone
# for the time being.
#
self._qt_theme = "Dark" # self._qt_theme_hint()
self._ida_theme = self._ida_theme_hint()
# mark the palette as initialized
self._initialized = True
def _ida_theme_hint(self):
"""
Binary hint of the IDA color theme.
This routine returns a best effort hint as to what kind of theme is
in use for the IDA Views (Disas, Hex, HexRays, etc).
Returns 'Dark' or 'Light' indicating the user's theme
"""
#
# determine whether to use a 'dark' or 'light' paint based on the
# background color of the user's IDA text based windows
#
bg_color = get_ida_bg_color()
# return 'Dark' or 'Light'
return test_color_brightness(bg_color)
def _qt_theme_hint(self):
"""
Binary hint of the Qt color theme.
This routine returns a best effort hint as to what kind of theme the
QtWidgets throughout IDA are using. This is to accommodate users
who may be using Zyantific's IDASkins plugin (or others) to further
customize IDA's appearance.
Returns 'Dark' or 'Light' indicating the user's theme
"""
#
# to determine what kind of Qt based theme IDA is using, we create a
# test widget and check the colors put into the palette the widget
# inherits from the application (eg, IDA).
#
test_widget = QtWidgets.QWidget()
#
# in order to 'realize' the palette used to render (draw) the widget,
# it first must be made visible. since we don't want to pop random
# widgets in front of the user, we set this attribute so that we can
# silently bake the widget colors.
#
# NOTE/COMPAT: WA_DontShowOnScreen
#
# https://www.riverbankcomputing.com/news/pyqt-56
#
# lmao, don't ask me why they forgot about this attribute from 5.0 - 5.6
#
if using_pyqt5:
test_widget.setAttribute(103) # taken from http://doc.qt.io/qt-5/qt.html
else:
test_widget.setAttribute(QtCore.Qt.WA_DontShowOnScreen)
# render the (invisible) widget
test_widget.show()
# now we farm the background color from the qwidget
bg_color = test_widget.palette().color(QtGui.QPalette.Window)
# 'hide' & delete the widget
test_widget.hide()
test_widget.deleteLater()
# return 'Dark' or 'Light'
return test_color_brightness(bg_color)
#--------------------------------------------------------------------------
# Coverage Overview
#--------------------------------------------------------------------------
@property
def selection(self):
return self._selection[self.qt_theme]
@property
def coverage_bad(self):
return self._coverage_bad[self.qt_theme]
@property
def coverage_good(self):
return self._coverage_good[self.qt_theme]
#--------------------------------------------------------------------------
# IDA Views / HexRays
#--------------------------------------------------------------------------
@property
def ida_coverage(self):
return self._ida_coverage[self.ida_theme]
#--------------------------------------------------------------------------
# Composing Shell
#--------------------------------------------------------------------------
@property
def overview_bg(self):
return self._overview_bg[self.qt_theme]
@property
def composer_fg(self):
return self._composer_fg[self.qt_theme]
@property
def valid_text(self):
return self._valid_text[self.qt_theme]
@property
def invalid_text(self):
return self._invalid_text[self.qt_theme]
@property
def invalid_highlight(self):
return self._invalid_highlight[self.qt_theme]
#--------------------------------------------------------------------------
# Composition Grammar
#--------------------------------------------------------------------------
@property
def logic_token(self):
return self._logic_token[self.qt_theme]
@property
def comma_token(self):
return self._comma_token[self.qt_theme]
@property
def paren_token(self):
return self._paren_token[self.qt_theme]
@property
def coverage_token(self):
return self._coverage_token[self.qt_theme]
@property
def TOKEN_COLORS(self):
"""
Return the palette of token colors.
"""
return \
{
# logic operators
"OR": self.logic_token,
"XOR": self.logic_token,
"AND": self.logic_token,
"MINUS": self.logic_token,
# misc
"COMMA": self.comma_token,
"LPAREN": self.paren_token,
"RPAREN": self.paren_token,
#"WS": self.whitespace_token,
#"UNKNOWN": self.unknown_token,
# coverage
"COVERAGE_TOKEN": self.coverage_token,
}
#------------------------------------------------------------------------------
# Palette Util
#------------------------------------------------------------------------------
def test_color_brightness(color):
"""
Test the brightness of a color.
"""
if color.lightness() > 255.0/2:
return "Light"
else:
return "Dark"
def compute_color_on_gradiant(percent, color1, color2):
"""
Compute the color specified by a percent between two colors.
TODO: This is silly, heavy, and can be refactored.
"""
# dump the rgb values from QColor objects
r1, g1, b1, _ = color1.getRgb()
r2, g2, b2, _ = color2.getRgb()
# compute the new color across the gradient of color1 -> color2
r = r1 + percent * (r2 - r1)
g = g1 + percent * (g2 - g1)
b = b1 + percent * (b2 - b1)
# return the new color
return QtGui.QColor(r,g,b)
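compute_color_on_gradiant() is plain per-channel linear interpolation; a Qt-free sketch of the same math, using (r, g, b) tuples in place of QColor:

```python
def lerp_color(percent, rgb1, rgb2):
    """Linearly interpolate between two (r, g, b) tuples, percent in [0, 1]."""
    return tuple(
        int(c1 + percent * (c2 - c1))  # same formula as the QColor version
        for c1, c2 in zip(rgb1, rgb2)
    )

# the light-theme 'bad' red and 'good' blue from the palette above
bad, good = (221, 0, 0), (51, 153, 255)
```

At percent 0.0 you get the 'bad' endpoint, at 1.0 the 'good' endpoint, and the midpoint lands on a muted blend — exactly how the coverage table shades functions by instruction percent.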
@@ -1 +0,0 @@
from drcov import DrcovData
@@ -1,29 +0,0 @@
import idaapi
from .coverage_overview import *
#------------------------------------------------------------------------------
# IDA Action Handler Stub
#------------------------------------------------------------------------------
class IDACtxEntry(idaapi.action_handler_t):
"""
A basic Context Menu class to utilize IDA's action handlers.
"""
def __init__(self, action_function):
idaapi.action_handler_t.__init__(self)
self.action_function = action_function
def activate(self, ctx):
"""
Execute the embedded action_function when this context menu is invoked.
"""
self.action_function()
return 1
def update(self, ctx):
"""
Ensure the context menu is always available in IDA.
"""
return idaapi.AST_ENABLE_ALWAYS
@@ -1,977 +0,0 @@
import string
import logging
import weakref
from operator import itemgetter, attrgetter
import idaapi
from lighthouse.util import *
from .coverage_combobox import CoverageComboBox
from lighthouse.composer import ComposingShell
from lighthouse.metadata import FunctionMetadata, metadata_progress
from lighthouse.coverage import FunctionCoverage
logger = logging.getLogger("Lighthouse.UI.Overview")
#------------------------------------------------------------------------------
# Constants Definition
#------------------------------------------------------------------------------
# declare named constants for coverage table column indexes
COV_PERCENT = 0
FUNC_NAME = 1
FUNC_ADDR = 2
BLOCKS_HIT = 3
INST_HIT = 4
FUNC_SIZE = 5
COMPLEXITY = 6
FINAL_COLUMN = 7
# column -> field name mapping
COLUMN_TO_FIELD = \
{
COV_PERCENT: "instruction_percent",
FUNC_NAME: "name",
FUNC_ADDR: "address",
BLOCKS_HIT: "nodes_executed",
INST_HIT: "instructions_executed",
FUNC_SIZE: "size",
COMPLEXITY: "cyclomatic_complexity"
}
# column headers of the table
SAMPLE_CONTENTS = \
[
" 100.00% ",
" sub_140001B20 ",
" 0x140001b20 ",
" 100 / 100 ",
" 1000 / 1000 ",
" 10000000 ",
" 1000000 "
]
#------------------------------------------------------------------------------
# Pseudo Widget Filter
#------------------------------------------------------------------------------
debugger_docked = False
class EventProxy(QtCore.QObject):
def __init__(self, target):
super(EventProxy, self).__init__()
self._target = target
def eventFilter(self, source, event):
#
# hook the destroy event of the coverage overview widget so that we can
# cleanup after ourselves in the interest of stability
#
if int(event.type()) == 16: # NOTE/COMPAT: QtCore.QEvent.Destroy not in IDA7?
self._target.terminate()
#
# this is an unknown event, but it seems to fire when the widget is
# being saved/restored by a QMainWindow. We use this to try and ensure
# the Coverage Overview stays docked when flipping between Reversing
# and Debugging states in IDA.
#
# See issue #16 on github for more information.
#
if int(event.type()) == 2002:
#
# if the general registers IDA View exists, we make the assumption
# that the user has probably started debugging.
#
# NOTE / COMPAT:
if using_ida7api:
debug_mode = bool(idaapi.find_widget("General registers"))
else:
debug_mode = bool(idaapi.find_tform("General registers"))
#
# if this is the first time the user has started debugging, dock
# the coverage overview in the debug QMainWindow workspace. its
# dock status / position should persist future debugger launches.
#
global debugger_docked
if debug_mode and not debugger_docked:
idaapi.set_dock_pos(self._target._title, "Structures", idaapi.DP_TAB)
debugger_docked = True
return False
#------------------------------------------------------------------------------
# Coverage Overview
#------------------------------------------------------------------------------
class CoverageOverview(DockableShim):
"""
The Coverage Overview Widget.
"""
def __init__(self, director):
super(CoverageOverview, self).__init__(
"Coverage Overview",
plugin_resource(os.path.join("icons", "overview.png"))
)
# local reference to the director
self._director = director
# underlying data model for the coverage overview
self._model = CoverageModel(director, self._widget)
# pseudo widget science
self._visible = False
self._events = EventProxy(self)
self._widget.installEventFilter(self._events)
# initialize the plugin UI
self._ui_init()
# refresh the data UI such that it reflects the most recent data
self.refresh()
#--------------------------------------------------------------------------
# Pseudo Widget Functions
#--------------------------------------------------------------------------
def show(self):
"""
Show the CoverageOverview UI / widget.
"""
self.refresh()
super(CoverageOverview, self).show()
self._visible = True
def terminate(self):
"""
The CoverageOverview is being hidden / deleted.
"""
self._visible = False
self._model = None
self._widget = None
def isVisible(self):
return self._visible
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize our ui elements
self._ui_init_table()
self._ui_init_toolbar()
self._ui_init_ctx_menu_actions()
self._ui_init_signals()
# layout the populated ui just before showing it
self._ui_layout()
def _ui_init_table(self):
"""
Initialize the coverage table.
"""
palette = self._director._palette
self._table = QtWidgets.QTableView()
self._table.setFocusPolicy(QtCore.Qt.NoFocus)
self._table.setStyleSheet(
"QTableView { gridline-color: black; background-color: %s } " % palette.overview_bg.name() +
"QTableView::item:selected { color: white; background-color: %s; } " % palette.selection.name()
)
# set these properties so the user can arbitrarily shrink the table
self._table.setMinimumHeight(0)
self._table.setSizePolicy(
QtWidgets.QSizePolicy.Ignored,
QtWidgets.QSizePolicy.Ignored
)
# install the underlying data source for the table
self._table.setModel(self._model)
# set the initial column widths for the table
for i in xrange(len(SAMPLE_CONTENTS)):
rect = self._font_metrics.boundingRect(SAMPLE_CONTENTS[i])
self._table.setColumnWidth(i, rect.width())
# table selection should be by row, not by cell
self._table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
# more code-friendly, readable aliases
vh = self._table.verticalHeader()
hh = self._table.horizontalHeader()
# NOTE/COMPAT: set the row heights as fixed
if using_pyqt5:
vh.setSectionResizeMode(QtWidgets.QHeaderView.Fixed)
else:
vh.setResizeMode(QtWidgets.QHeaderView.Fixed)
# specify the fixed row height in pixels
vh.setDefaultSectionSize(int(self._font_metrics.height()))
# hide the vertical headers themselves as we don't need them
vh.hide()
# stretch the last column (which is blank)
hh.setStretchLastSection(True)
# disable bolding of table column headers when table is selected
hh.setHighlightSections(False)
# allow sorting of the table, and initialize the sort indicator
self._table.setSortingEnabled(True)
hh.setSortIndicator(FUNC_ADDR, QtCore.Qt.AscendingOrder)
def _ui_init_toolbar(self):
"""
Initialize the coverage toolbar.
"""
# initialize toolbar elements
self._ui_init_toolbar_elements()
# populate the toolbar
self._toolbar = QtWidgets.QToolBar()
#
# customize the style of the bottom toolbar specifically, we are
# interested in tweaking the separator and item padding.
#
self._toolbar.setStyleSheet(
"""
QToolBar::separator
{
background-color: #909090;
width: 2px;
margin: 0 0.5em 0 0.5em
}
""")
# populate the toolbar with all our subordinates
self._toolbar.addWidget(self._splitter)
self._toolbar.addSeparator()
self._toolbar.addWidget(self._hide_zero_label)
self._toolbar.addWidget(self._hide_zero_checkbox)
def _ui_init_toolbar_elements(self):
"""
Initialize the coverage toolbar UI elements.
"""
# the composing shell
self._shell = ComposingShell(
self._director,
weakref.proxy(self._model),
self._table
)
# the coverage combobox
self._combobox = CoverageComboBox(self._director)
# the checkbox to hide 0% coverage entries
self._hide_zero_label = QtWidgets.QLabel("Hide 0% Coverage: ")
self._hide_zero_label.setFont(self._font)
self._hide_zero_checkbox = QtWidgets.QCheckBox()
# the splitter to make the shell / combobox resizable
self._splitter = QtWidgets.QSplitter(QtCore.Qt.Horizontal)
self._splitter.setStyleSheet(
"""
QSplitter::handle
{
background-color: #909090;
width: 2px;
height: 2px;
margin: 0 0.5em 0 0.5em
}
QSplitter::handle:horizontal:hover
{
background-color: #3399FF;
}
""")
# add the child items we wish to put the 'splitter' between
self._splitter.addWidget(self._shell)
self._splitter.addWidget(self._combobox)
# this makes the splitter responsive to hover events
self._splitter.handle(1).setAttribute(QtCore.Qt.WA_Hover)
# give the shell expansion preference over the combobox
self._splitter.setStretchFactor(0, 1)
def _ui_init_ctx_menu_actions(self):
"""
Initialize the right click context menu actions.
"""
# function actions
self._action_rename = QtWidgets.QAction("Rename", None)
self._action_copy_name = QtWidgets.QAction("Copy name", None)
self._action_copy_address = QtWidgets.QAction("Copy address", None)
# function prefixing actions
self._action_prefix = QtWidgets.QAction("Prefix selected functions", None)
self._action_clear_prefix = QtWidgets.QAction("Clear prefixes", None)
# misc actions
self._action_refresh_metadata = QtWidgets.QAction("Full refresh (slow)", None)
def _ui_init_signals(self):
"""
Connect UI signals.
"""
# jump to disassembly on table row double click
self._table.doubleClicked.connect(self._ui_entry_double_click)
# right click popup menu
self._table.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
self._table.customContextMenuRequested.connect(self._ui_ctx_menu_handler)
# toggle 0% coverage checkbox
self._hide_zero_checkbox.stateChanged.connect(self._ui_hide_zero_toggle)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
# layout the major elements of our widget
layout = QtWidgets.QGridLayout()
layout.addWidget(self._table)
layout.addWidget(self._toolbar)
# apply the layout to the containing form
self._widget.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_entry_double_click(self, index):
"""
Handle double click event on the coverage table.
A double click on the coverage table view will jump the user to
the corresponding function in the IDA disassembly view.
"""
idaapi.jumpto(self._model.row2func[index.row()])
def _ui_ctx_menu_handler(self, position):
"""
Handle right click context menu event on the coverage table.
"""
# create a right click menu based on the state and context
ctx_menu = self._populate_ctx_menu()
if not ctx_menu:
return
# show the popup menu to the user, and wait for their selection
action = ctx_menu.exec_(self._table.viewport().mapToGlobal(position))
# process the user action
self._process_ctx_menu_action(action)
def _ui_hide_zero_toggle(self, checked):
"""
Handle state change of 'Hide 0% Coverage' checkbox.
"""
self._model.filter_zero_coverage(checked)
#--------------------------------------------------------------------------
# Context Menu
#--------------------------------------------------------------------------
def _populate_ctx_menu(self):
"""
Populate a context menu for the table view based on selection.
Returns a populated QMenu, or None.
"""
# get the list rows currently selected in the coverage table
selected_rows = self._table.selectionModel().selectedRows()
if len(selected_rows) == 0:
return None
# the context menu we will dynamically populate
ctx_menu = QtWidgets.QMenu()
#
# if there is only one table entry (a function) selected, then
# show the menu actions available for a single function such as
# copy function name, address, or renaming the function.
#
if len(selected_rows) == 1:
ctx_menu.addAction(self._action_rename)
ctx_menu.addAction(self._action_copy_name)
ctx_menu.addAction(self._action_copy_address)
ctx_menu.addSeparator()
# function prefixing actions
ctx_menu.addAction(self._action_prefix)
ctx_menu.addAction(self._action_clear_prefix)
ctx_menu.addSeparator()
# misc actions
ctx_menu.addAction(self._action_refresh_metadata)
# return the completed context menu
return ctx_menu
def _process_ctx_menu_action(self, action):
"""
Process the given (user selected) context menu action.
"""
# a right click menu action was not clicked. nothing else to do
if not action:
return
# get the list rows currently selected in the coverage table
selected_rows = self._table.selectionModel().selectedRows()
if len(selected_rows) == 0:
return
#
# extract the function addresses for the list of selected rows
# as they will probably come in handy later.
#
function_addresses = []
for index in selected_rows:
address = self._model.row2func[index.row()]
function_addresses.append(address)
#
# check the universal actions first
#
# handle the 'Prefix functions' action
if action == self._action_prefix:
gui_prefix_functions(function_addresses)
# handle the 'Clear prefix' action
elif action == self._action_clear_prefix:
clear_prefixes(function_addresses)
# handle the 'Refresh metadata' action
elif action == self._action_refresh_metadata:
idaapi.show_wait_box("Building database metadata...")
self._director.refresh()
# ensure the table's model gets refreshed
idaapi.replace_wait_box("Refreshing Coverage Overview...")
self.refresh()
# all done
idaapi.hide_wait_box()
#
# the following actions are only applicable if there is only one
# row/function selected in the coverage overview table. don't
# bother to check multi-function selections against these
#
if len(selected_rows) != 1:
return
# unpack the single QModelIndex
index = selected_rows[0]
function_address = function_addresses[0]
# handle the 'Rename' action
if action == self._action_rename:
gui_rename_function(function_address)
# handle the 'Copy name' action
elif action == self._action_copy_name:
name_index = self._model.index(index.row(), FUNC_NAME)
function_name = self._model.data(name_index, QtCore.Qt.DisplayRole)
copy_to_clipboard(function_name)
# handle the 'Copy address' action
elif action == self._action_copy_address:
address_string = "0x%X" % function_address
copy_to_clipboard(address_string)
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
@idafast
def refresh(self):
"""
Refresh the Coverage Overview.
"""
self._model.refresh()
self._shell.refresh()
self._combobox.refresh()
#------------------------------------------------------------------------------
# Coverage Table - TableModel
#------------------------------------------------------------------------------
class CoverageModel(QtCore.QAbstractTableModel):
"""
A Qt model interface to format coverage data for Qt views.
"""
def __init__(self, director, parent=None):
super(CoverageModel, self).__init__(parent)
self._blank_coverage = FunctionCoverage(idaapi.BADADDR)
# local reference to the director
self._director = director
# mapping to correlate a given row in the table to its function coverage
self.row2func = {}
self._row_count = 0
# internal mappings of the explicit data / coverage we render
self._no_coverage = []
self._visible_metadata = {}
self._visible_coverage = {}
# column headers of the table
self._column_headers = \
{
COV_PERCENT: "Coverage %",
FUNC_NAME: "Function Name",
FUNC_ADDR: "Address",
BLOCKS_HIT: "Blocks Hit",
INST_HIT: "Instructions Hit",
FUNC_SIZE: "Function Size",
COMPLEXITY: "Complexity",
FINAL_COLUMN: "" # NOTE: stretch section, left blank for now
}
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font_metrics = QtGui.QFontMetricsF(self._font)
#----------------------------------------------------------------------
# Sorting
#----------------------------------------------------------------------
# members to enlighten the model to its last known sort state
self._last_sort = FUNC_ADDR
self._last_sort_order = QtCore.Qt.AscendingOrder
#----------------------------------------------------------------------
# Filters
#----------------------------------------------------------------------
# OPTION: display 0% coverage entries
self._hide_zero = False
# OPTION: display functions matching search_string (substring)
self._search_string = ""
#----------------------------------------------------------------------
# Signals
#----------------------------------------------------------------------
# register for cues from the director
self._director.coverage_switched(self._internal_refresh)
self._director.coverage_modified(self._internal_refresh)
self._director.metadata_modified(self._data_changed)
#--------------------------------------------------------------------------
# AbstractItemModel Overloads
#--------------------------------------------------------------------------
def flags(self, index):
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
def rowCount(self, index=QtCore.QModelIndex()):
"""
The number of table rows.
"""
return self._row_count
def columnCount(self, index=QtCore.QModelIndex()):
"""
The number of table columns.
"""
return len(self._column_headers)
def headerData(self, column, orientation, role=QtCore.Qt.DisplayRole):
"""
Define the properties of the table rows & columns.
"""
if orientation == QtCore.Qt.Horizontal:
# the title of the header columns has been requested
if role == QtCore.Qt.DisplayRole:
try:
return self._column_headers[column]
except KeyError:
pass
# the text alignment of the header has been requested
elif role == QtCore.Qt.TextAlignmentRole:
# center align all columns
return QtCore.Qt.AlignHCenter
# unhandled header request
return None
def data(self, index, role=QtCore.Qt.DisplayRole):
"""
Define how Qt should access the underlying model data.
"""
# data display request
if role == QtCore.Qt.DisplayRole:
# grab for speed
column = index.column()
# lookup the function info for this row
try:
function_address = self.row2func[index.row()]
function_metadata = self._director.metadata.functions[function_address]
#
# if we hit a KeyError, it is probably because the database metadata
# is being refreshed and the model (this object) has yet to be
# updated.
#
# this should only ever happen as a result of the user using the
# right click 'Refresh metadata' action. And even then, only when
# a function they undefined in the IDB is visible in the coverage
# overview table view.
#
# In theory, the table should get refreshed *after* the metadata
# refresh completes. So for now, we simply return the filler
# string '?'
#
except KeyError:
return "?"
#
# remember, if a function does *not* have coverage data, it will
# not have an entry in the coverage map. that means we should
# yield a default, 'blank', coverage item in these instances
#
function_coverage = self._director.coverage.functions.get(
function_address,
self._blank_coverage
)
# Coverage % - (by instruction execution)
if column == COV_PERCENT:
return "%5.2f%%" % (function_coverage.instruction_percent*100)
# Function Name
elif column == FUNC_NAME:
return function_metadata.name
# Function Address
elif column == FUNC_ADDR:
return "0x%X" % function_metadata.address
# Basic Blocks
elif column == BLOCKS_HIT:
return "%3u / %-3u" % (function_coverage.nodes_executed,
function_metadata.node_count)
# Instructions Hit
elif column == INST_HIT:
return "%4u / %-4u" % (function_coverage.instructions_executed,
function_metadata.instruction_count)
# Function Size
elif column == FUNC_SIZE:
return "%u" % function_metadata.size
# Cyclomatic Complexity
elif column == COMPLEXITY:
return "%u" % function_metadata.cyclomatic_complexity
# cell background color request
elif role == QtCore.Qt.BackgroundRole:
function_address = self.row2func[index.row()]
function_coverage = self._director.coverage.functions.get(
function_address,
self._blank_coverage
)
return function_coverage.coverage_color
# font color request
elif role == QtCore.Qt.ForegroundRole:
return QtGui.QColor(QtCore.Qt.white)
# font format request
elif role == QtCore.Qt.FontRole:
return self._font
# text alignment request
elif role == QtCore.Qt.TextAlignmentRole:
return QtCore.Qt.AlignCenter
# unhandled request, nothing to do
return None
#----------------------------------------------------------------------
# Sorting
#----------------------------------------------------------------------
def sort(self, column, sort_order):
"""
Sort coverage data model by column.
"""
#
# look up the name of field in the FunctionCoverage class object
# that we would like to sort by based on the selected column
#
try:
sort_field = COLUMN_TO_FIELD[column]
# column has not been enlightened to sorting
except KeyError:
logger.warning("TODO: implement column %u sorting" % column)
self.layoutChanged.emit()
return
#
# NOTE: attrgetter appears to profile ~8-12% faster than lambdas
# accessing the member on the member, hence the strange paradigm
#
# sort the table entries by a function metadata attribute
if column in [FUNC_NAME, FUNC_ADDR, FUNC_SIZE, COMPLEXITY]:
sorted_functions = sorted(
self._visible_metadata.itervalues(),
key=attrgetter(sort_field),
reverse=sort_order
)
# sort the table entries by a function coverage attribute
elif column in [COV_PERCENT, BLOCKS_HIT, INST_HIT]:
sorted_functions = sorted(
self._visible_coverage.itervalues(),
key=attrgetter(sort_field),
reverse=sort_order
)
#
# we sorted only the function items that have known coverage.
# but since some functions may not have had coverage, they were
# not included in the sort.
#
# we simply append (or prepend) these unsortable (no coverage)
# functions to the sorted list as they are still members of
# the visible set regardless of their coverage status
#
#
# if the sort was descending (100% --> 0%), the no_coverage
# items (0%) should be appended to the *end*
#
if sort_order:
sorted_functions += self._no_coverage
#
# if the sort was ascending (0% --> 100%), the no_coverage
# items (0%) should be prepended to the *front*
#
else:
sorted_functions = self._no_coverage + sorted_functions
# create a generator of the sorted function addresses
sorted_addresses = (x.address for x in sorted_functions)
# finally, rebuild the row2func mapping and notify views of this change
self.row2func = dict(zip(xrange(len(sorted_functions)), sorted_addresses))
self.func2row = {v: k for k, v in self.row2func.iteritems()}
self.layoutChanged.emit()
# save the details of this sort event as they may be needed later
self._last_sort = column
self._last_sort_order = sort_order
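The attrgetter pattern used by sort() can be demonstrated in isolation; `Func` below is a hypothetical stand-in for FunctionMetadata, used only for illustration:

```python
from operator import attrgetter
from collections import namedtuple

# hypothetical stand-in for FunctionMetadata, for illustration only
Func = namedtuple("Func", ["address", "size"])

functions = [Func(0x401000, 64), Func(0x402000, 16), Func(0x403000, 32)]

# pick the sort key field at runtime, as COLUMN_TO_FIELD does per column
sort_field = "size"
ordered = sorted(functions, key=attrgetter(sort_field), reverse=True)
assert [f.address for f in ordered] == [0x401000, 0x403000, 0x402000]
```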
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_modeled_coverage_percent(self):
"""
Get the coverage % represented by the current (visible) model.
"""
# sum the # of instructions in all the visible functions
instruction_count = sum(
meta.instruction_count for meta in self._visible_metadata.itervalues()
)
# sum the # of instructions executed in all the visible functions
instructions_executed = sum(
cov.instructions_executed for cov in self._visible_coverage.itervalues()
)
# compute coverage percentage of the visible functions
return (float(instructions_executed) / (instruction_count or 1))*100
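The aggregate percentage reduces to a pair of sums and one division; the per-function counts below are made-up sample values:

```python
# per-function instruction totals and executed counts (sample values)
instruction_counts = [10, 20, 30]
instructions_executed = [5, 20, 0]

total = sum(instruction_counts)          # 60
executed = sum(instructions_executed)    # 25

# the (total or 1) guard avoids a ZeroDivisionError on an empty model
percent = (float(executed) / (total or 1)) * 100
assert round(percent, 2) == 41.67
```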
#--------------------------------------------------------------------------
# Filters
#--------------------------------------------------------------------------
def filter_zero_coverage(self, hide=True):
"""
Filter out zero coverage functions from the model.
"""
# the hide/unhide request matches the current state, ignore
if self._hide_zero == hide:
return
# the filter is changing states, so we need to recompute the model
self._hide_zero = hide
self._internal_refresh()
def filter_string(self, search_string):
"""
Filter out functions whose names do not contain the given substring.
"""
# the filter string matches the current string, ignore
if search_string == self._search_string:
return
# the filter is changing states, so we need to recompute the model
self._search_string = search_string
self._internal_refresh()
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
def refresh(self):
"""
Public refresh of the coverage model.
"""
self._internal_refresh()
@idafast
def _internal_refresh(self):
"""
Internal refresh of the coverage model.
"""
self._refresh_data()
# sort the data set according to the last selected sorted column
self.sort(self._last_sort, self._last_sort_order)
@idafast
def _data_changed(self):
"""
Notify attached views that simple model data has been updated/modified.
"""
self.dataChanged.emit(QtCore.QModelIndex(), QtCore.QModelIndex())
def _refresh_data(self):
"""
Initialize the mapping to go from displayed row to function.
"""
row = 0
self.row2func = {}
self.func2row = {}
self._row_count = 0
self._no_coverage = []
self._visible_coverage = {}
self._visible_metadata = {}
metadata = self._director.metadata
coverage = self._director.coverage
#
# if the search string is all lowercase, then we are going to perform
# a case insensitive search/filter.
#
# that means we want to 'normalize' all the function names by
# making them lowercase before searching for our needle (search str)
#
normalize = lambda x: x
if not (set(self._search_string) & set(string.ascii_uppercase)):
normalize = lambda x: x.lower()
#
# it's time to rebuild the list of coverage items to make visible in
# the coverage overview list. during this process, we filter out entries
# that do not meet the criteria as specified by the user.
#
# loop through *all* the functions as defined in the active metadata
for function_address in metadata.functions.iterkeys():
#------------------------------------------------------------------
# Filters - START
#------------------------------------------------------------------
# OPTION: ignore items with 0% coverage items
if self._hide_zero and function_address not in coverage.functions:
continue
# OPTION: ignore items that do not match the search string
if self._search_string not in normalize(metadata.functions[function_address].name):
continue
#------------------------------------------------------------------
# Filters - END
#------------------------------------------------------------------
# store a reference to the listed function's metadata
self._visible_metadata[function_address] = metadata.functions[function_address]
# store a reference to the listed function's coverage
if function_address in coverage.functions:
self._visible_coverage[function_address] = coverage.functions[function_address]
# reminder: coverage is *not* guaranteed :-)
else:
self._no_coverage.append(metadata.functions[function_address])
# map the function address to a visible row # for easy lookup
self.row2func[row] = function_address
row += 1
# build the inverse func --> row mapping
self.func2row = {v: k for k, v in self.row2func.iteritems()}
# bake the final number of rows into the model
self._row_count = len(self.row2func)
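The "smart case" filtering above (an all-lowercase search string matches case-insensitively, any uppercase makes it exact) can be sketched on its own; `matches` is a hypothetical helper name:

```python
import string

def matches(needle, name):
    # an all-lowercase needle normalizes the haystack to lowercase first
    if not (set(needle) & set(string.ascii_uppercase)):
        name = name.lower()
    return needle in name

assert matches("memcpy", "sub_MemCpy_thunk")      # case-insensitive hit
assert not matches("MemCpy", "sub_memcpy_thunk")  # exact-case miss
```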
@@ -1,6 +0,0 @@
from .ida import *
from .misc import *
from .debug import *
from .log import lmsg, logging_started, start_logging
from .shims import using_ida7api, using_pyqt5, QtCore, QtGui, QtWidgets, DockableShim
@@ -1,75 +0,0 @@
import cProfile
#------------------------------------------------------------------------------
# Debug
#------------------------------------------------------------------------------
#
# This file contains random snippets of code used while developing and
# debugging parts of lighthouse. I don't expect any of this code to be
# active or in use for major releases.
#
#------------------------------------------------------------------------------
# Call Profiling
#------------------------------------------------------------------------------
pr = cProfile.Profile()
def profile(func):
"""
A simple function profiling decorator.
"""
def wrap(*args, **kwargs):
global pr
pr.enable()
result = func(*args, **kwargs)
pr.disable()
pr.print_stats(sort="tottime")
return result
return wrap
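The same decorator pattern can capture its stats into a string rather than printing to stdout; `busy_sum` is just a throwaway workload for illustration:

```python
import io
import cProfile
import pstats

pr = cProfile.Profile()

def profile(func):
    """Profile every call to the wrapped function into the shared pr."""
    def wrap(*args, **kwargs):
        pr.enable()
        result = func(*args, **kwargs)
        pr.disable()
        return result
    return wrap

@profile
def busy_sum(n):
    return sum(range(n))

assert busy_sum(1000) == 499500

# render the collected stats into a string instead of stdout
stream = io.StringIO()
pstats.Stats(pr, stream=stream).sort_stats("tottime").print_stats()
assert "busy_sum" in stream.getvalue()
```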
#------------------------------------------------------------------------------
# Line Profiler
#------------------------------------------------------------------------------
#
# Function Profiling
#
# from: https://gist.github.com/sibelius/3920b3eb5adab482b105
try:
from line_profiler import LineProfiler
def line_profile(func):
def profiled_func(*args, **kwargs):
try:
profiler = LineProfiler()
profiler.add_function(func)
profiler.enable_by_count()
return func(*args, **kwargs)
finally:
profiler.print_stats()
return profiled_func
except ImportError:
def line_profile(func):
def nothing(*args, **kwargs):
return func(*args, **kwargs)
return nothing
#
# Module Profiling
#
if False:
from line_profiler import LineProfiler
lpr = LineProfiler()
# change this to the target file / module to profile
import lighthouse.metadata as metadata_module
lpr.add_module(metadata_module)
# put this code somewhere to dump results:
#global lpr
#lpr.enable_by_count()
#lpr.disable_by_count()
#lpr.print_stats()
@@ -1,712 +0,0 @@
import time
import Queue
import logging
import binascii
import functools
import idaapi
from .shims import using_ida7api, using_pyqt5, QtCore, QtGui, QtWidgets
logger = logging.getLogger("Lighthouse.Util.IDA")
#------------------------------------------------------------------------------
# HexRays Util
#------------------------------------------------------------------------------
def map_line2citem(decompilation_text):
"""
Map decompilation line numbers to citems.
-----------------------------------------------------------------------
This function allows us to build a relationship between citems in the
ctree and specific lines in the hexrays decompilation text.
-----------------------------------------------------------------------
Output:
+- line2citem:
| a map keyed with line numbers, holding sets of citem indexes
|
| eg: { int(line_number): set(citem_indexes), ... }
'
"""
line2citem = {}
#
# it turns out that citem indexes are actually stored inline with the
# decompilation text output, hidden behind COLOR_ADDR tokens.
#
# here we pass each line of raw decompilation text to our crappy lexer,
# extracting any COLOR_ADDR tokens as citem indexes
#
for line_number in xrange(decompilation_text.size()):
line_text = decompilation_text[line_number].line
line2citem[line_number] = lex_citem_indexes(line_text)
#logger.debug("Line Text: %s" % binascii.hexlify(line_text))
return line2citem
def map_line2node(cfunc, metadata, line2citem):
"""
Map decompilation line numbers to node (basic blocks) addresses.
-----------------------------------------------------------------------
This function allows us to build a relationship between graph nodes
(basic blocks) and specific lines in the hexrays decompilation text.
-----------------------------------------------------------------------
Output:
+- line2node:
| a map keyed with line numbers, holding sets of node addresses
|
| eg: { int(line_number): set(nodes), ... }
'
"""
line2node = {}
treeitems = cfunc.treeitems
function_address = cfunc.entry_ea
#
# prior to this function, a line2citem map was built to tell us which
# citems reside on any given line of text in the decompilation output.
#
# now, we walk through this line2citem map one 'line_number' at a time in
# an effort to resolve the set of graph nodes associated with its citems.
#
for line_number, citem_indexes in line2citem.iteritems():
nodes = set()
#
# we are at the level of a single line (line_number). we now consume
# its set of citems (citem_indexes) and attempt to identify the explicit
# graph nodes they claim to be sourced from (by their reported EA)
#
for index in citem_indexes:
# get the code address of the given citem
try:
item = treeitems[index]
address = item.ea
# apparently this is a thing on IDA 6.95
except IndexError:
continue
# find the graph node (eg, basic block) that generated this citem
node = metadata.get_node(address)
# address not mapped to a node... weird. continue to the next citem
if not node:
#logger.warning("Failed to map node to basic block")
continue
#
# we made it this far, so we must have found a node that contains
# this citem. save the computed node_id to the list of known
# nodes we have associated with this line of text
#
nodes.add(node.address)
#
# finally, save the completed list of node ids as identified for this
# line of decompilation text to the line2node map that we are building
#
line2node[line_number] = nodes
# all done, return the computed map
return line2node
def lex_citem_indexes(line):
"""
Lex all ctree item indexes from a given line of text.
-----------------------------------------------------------------------
The HexRays decompiler output contains invisible text tokens that can
be used to attribute spans of text to the ctree items that produced them.
This function will simply scrape and return a list of all these
tokens (COLOR_ADDR) which contain item indexes into the ctree.
"""
i = 0
indexes = []
line_length = len(line)
# lex COLOR_ADDR tokens from the line of text
while i < line_length:
# does this character mark the start of a new COLOR_* token?
if line[i] == idaapi.COLOR_ON:
# yes, so move past the COLOR_ON byte
i += 1
# is this sequence for a COLOR_ADDR?
if ord(line[i]) == idaapi.COLOR_ADDR:
# yes, so move past the COLOR_ADDR byte
i += 1
#
# A COLOR_ADDR token is followed by either 8 or 16 characters
# (a hex encoded number) that represents an address/pointer.
# in this context, it is actually the index number of a citem
#
citem_index = int(line[i:i+idaapi.COLOR_ADDR_SIZE], 16)
i += idaapi.COLOR_ADDR_SIZE
# save the extracted citem index
indexes.append(citem_index)
# skip to the next iteration as i has moved
continue
# nothing we care about happened, keep lexing forward
i += 1
# return all the citem indexes extracted from this line of text
return indexes
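The lexing loop above can be exercised outside of IDA with a small standalone sketch. The token values below (COLOR_ON, COLOR_ADDR, COLOR_ADDR_SIZE) are stand-in assumptions for the real idaapi constants, chosen only so the example is self-contained:

```python
# assumed stand-ins for the idaapi color escape constants
COLOR_ON = "\x01"        # assumed: marks the start of a COLOR_* escape
COLOR_ADDR = 0x28        # assumed: token id carrying a citem index
COLOR_ADDR_SIZE = 16     # assumed: width of the hex-encoded index

def lex_citem_indexes(line):
    """Scrape all COLOR_ADDR-embedded indexes from a line of text."""
    i, indexes = 0, []
    while i < len(line):
        # start of a COLOR_* escape sequence?
        if line[i] == COLOR_ON:
            i += 1
            # is it a COLOR_ADDR token carrying a citem index?
            if ord(line[i]) == COLOR_ADDR:
                i += 1
                indexes.append(int(line[i:i+COLOR_ADDR_SIZE], 16))
                i += COLOR_ADDR_SIZE
                continue
        i += 1
    return indexes

# build a synthetic line embedding citem index 0x2a, then lex it back out
line = "x = " + COLOR_ON + chr(COLOR_ADDR) + ("%016x" % 0x2a) + "y + 1;"
print(lex_citem_indexes(line))  # [42]
```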
#------------------------------------------------------------------------------
# Misc
#------------------------------------------------------------------------------
def touch_window(target):
"""
Touch a window/widget/form to ensure it gets drawn by IDA.
XXX/HACK:
We need to ensure that the widget we will analyze actually gets drawn
so that there are colors for us to steal.
To do this, we switch to it, and switch back. I tried a few different
ways to trigger this from Qt, but could only trigger the full
painting by going through the IDA routines.
"""
# get the currently active widget/form title (the form itself seems transient...)
if using_ida7api:
twidget = idaapi.get_current_widget()
title = idaapi.get_widget_title(twidget)
else:
form = idaapi.get_current_tform()
title = idaapi.get_tform_title(form)
# touch/draw the widget by playing musical chairs
if using_ida7api:
# touch the target window by switching to it
idaapi.activate_widget(target, True)
flush_ida_sync_requests()
# locate our previous selection
previous_twidget = idaapi.find_widget(title)
# return us to our previous selection
idaapi.activate_widget(previous_twidget, True)
flush_ida_sync_requests()
else:
# touch the target window by switching to it
idaapi.switchto_tform(target, True)
flush_ida_sync_requests()
# locate our previous selection
previous_form = idaapi.find_tform(title)
# lookup our original form and switch back to it
idaapi.switchto_tform(previous_form, True)
flush_ida_sync_requests()
def get_ida_bg_color():
"""
Get the background color of an IDA disassembly view.
-----------------------------------------------------------------------
The necessity of this function is pretty silly. I would like lighthouse
to be color-aware of the user's IDA theme such that it selects reasonable
colors that maintain readability.
Since there is no supported way to probe the palette & colors in use by
IDA, we must get creative. This function attempts to locate an IDA
disassembly view, and take a screenshot of said widget. It will then
attempt to extract the color of a single background pixel (hopefully).
PS: please expose the get_graph_color(...) palette accessor, Ilfak ;_;
"""
if using_ida7api:
return get_ida_bg_color_ida7()
else:
return get_ida_bg_color_ida6()
def get_ida_bg_color_ida7():
"""
Get the background color of an IDA disassembly view. (IDA 7+)
"""
names = ["Enums", "Structures"]
names += ["Hex View-%u" % i for i in range(5)]
names += ["IDA View-%c" % chr(ord('A') + i) for i in range(5)]
# find a form (eg, IDA view) to analyze colors from
for window_name in names:
twidget = idaapi.find_widget(window_name)
if twidget:
break
else:
raise RuntimeError("Failed to find donor view")
# touch the target form so we know it is populated
touch_window(twidget)
# locate the Qt Widget for a form and take 1px image slice of it
import sip
widget = sip.wrapinstance(long(twidget), QtWidgets.QWidget)
pixmap = widget.grab(QtCore.QRect(0, 10, widget.width(), 1))
# convert the raw pixmap into an image (easier to interface with)
image = QtGui.QImage(pixmap.toImage())
# return the predicted background color
return QtGui.QColor(predict_bg_color(image))
def get_ida_bg_color_ida6():
"""
Get the background color of an IDA disassembly view. (IDA 6.x)
"""
names = ["Enums", "Structures"]
names += ["Hex View-%u" % i for i in range(5)]
names += ["IDA View-%c" % chr(ord('A') + i) for i in range(5)]
# find a form (eg, IDA view) to analyze colors from
for window_name in names:
form = idaapi.find_tform(window_name)
if form:
break
else:
raise RuntimeError("Failed to find donor view")
# touch the target form so we know it is populated
touch_window(form)
# locate the Qt Widget for a form and take 1px image slice of it
if using_pyqt5:
widget = idaapi.PluginForm.FormToPyQtWidget(form)
pixmap = widget.grab(QtCore.QRect(0, 10, widget.width(), 1))
else:
widget = idaapi.PluginForm.FormToPySideWidget(form)
region = QtCore.QRect(0, 10, widget.width(), 1)
pixmap = QtGui.QPixmap.grabWidget(widget, region)
# convert the raw pixmap into an image (easier to interface with)
image = QtGui.QImage(pixmap.toImage())
# return the predicted background color
return QtGui.QColor(predict_bg_color(image))
def predict_bg_color(image):
"""
Predict the background color of an IDA View from a given image slice.
We hypothesize that the 'background color' of a given image slice of an
IDA form will be the color that appears in the longest 'streaks' or
continuous sequences. This will probably be true 99% of the time.
This function takes an image, and analyzes its first row of pixels. It
will return the color that it believes to be the 'background color' based
on its sequence length.
"""
assert image.width() and image.height()
# the details for the longest known color streak will be saved in these
longest = 1
speculative_bg = image.pixel(0, 0)
# this will be the computed length of the current color streak
sequence = 1
# find the longest streak of color in a single pixel slice
for x in xrange(1, image.width()):
# the color of this pixel matches the last pixel, extend the streak count
if image.pixel(x, 0) == image.pixel(x-1,0):
sequence += 1
#
# this catches the case where the longest color streak is in fact
# the last one. this ensures the streak color will get saved.
#
if x != image.width() - 1:
continue
# color change, determine if this was the longest continuous color streak
if sequence > longest:
# save the last pixel as the longest sequence / most likely BG color
longest = sequence
speculative_bg = image.pixel(x-1, 0)
# reset the sequence counter
sequence = 1
# return the color we speculate to be the background color
return speculative_bg
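The longest-streak heuristic can be sanity-checked without Qt or IDA by reducing the image slice to a plain list of pixel values. This is a minimal standalone restatement of the same algorithm, not the plugin's actual Qt-backed code:

```python
def predict_bg_color(pixels):
    """Return the value with the longest continuous run in 'pixels'."""
    assert pixels
    longest, best = 1, pixels[0]
    sequence = 1
    for x in range(1, len(pixels)):
        # streak continues; only fall through on the final pixel so a
        # streak that ends the slice still gets evaluated below
        if pixels[x] == pixels[x - 1]:
            sequence += 1
            if x != len(pixels) - 1:
                continue
        # streak ended (or slice ended); keep it if it is the longest
        if sequence > longest:
            longest, best = sequence, pixels[x - 1]
        sequence = 1
    return best

# text pixels (0xFF) interrupt the background (0x20), but the background
# still wins by streak length, even though its final run ends the slice
print(predict_bg_color([0x20, 0x20, 0xFF, 0xFF, 0x20, 0x20, 0x20]))  # 32
```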
#------------------------------------------------------------------------------
# IDA execute_sync decorators
#------------------------------------------------------------------------------
# from: Will Ballenthin
# http://www.williballenthin.com/blog/2015/09/04/idapython-synchronization-decorator
#
def idafast(f):
"""
Decorator for marking a function as fast / UI event
"""
@functools.wraps(f)
def wrapper(*args, **kwargs):
ff = functools.partial(f, *args, **kwargs)
if idaapi.is_main_thread():
return ff()
else:
return idaapi.execute_sync(ff, idaapi.MFF_FAST)
return wrapper
def idawrite_async(f):
"""
Decorator for marking a function as completely async.
"""
@functools.wraps(f)
def wrapper(*args, **kwargs):
ff = functools.partial(f, *args, **kwargs)
return idaapi.execute_sync(ff, idaapi.MFF_NOWAIT | idaapi.MFF_WRITE)
return wrapper
def idawrite(f):
"""
Decorator for marking a function as modifying the IDB.
"""
@functools.wraps(f)
def wrapper(*args, **kwargs):
ff = functools.partial(f, *args, **kwargs)
if idaapi.is_main_thread():
return ff()
else:
return idaapi.execute_sync(ff, idaapi.MFF_WRITE)
return wrapper
def idaread(f):
"""
Decorator for marking a function as reading from the IDB.
MFF_READ constant via: http://www.openrce.org/forums/posts/1827
"""
@functools.wraps(f)
def wrapper(*args, **kwargs):
ff = functools.partial(f, *args, **kwargs)
if idaapi.is_main_thread():
return ff()
else:
return idaapi.execute_sync(ff, idaapi.MFF_READ)
return wrapper
def mainthread(f):
"""
A debug decorator to assert main thread execution.
"""
def wrapper(*args, **kwargs):
assert idaapi.is_main_thread()
return f(*args, **kwargs)
return wrapper
def execute_sync(sync_flags=idaapi.MFF_FAST):
"""
Synchronization decorator capable of providing return values.
From https://github.com/vrtadmin/FIRST-plugin-ida
"""
def real_decorator(function):
@functools.wraps(function)
def wrapper(*args, **kwargs):
output = [None]
#
# this inline function definition is technically what will execute
# in the context of the main thread. we use this thunk to capture
# any output the function may want to return to the user.
#
def thunk():
output[0] = function(*args, **kwargs)
return 1
# already in the target (main) thread, execute thunk now
if idaapi.is_main_thread():
thunk()
# send the synchronization request to IDA
else:
idaapi.execute_sync(thunk, sync_flags)
# return the output of the synchronized function
return output[0]
return wrapper
return real_decorator
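The essential trick in execute_sync() is that the dispatched callable cannot return a value directly, so the wrapper smuggles the result out through a mutable cell that the thunk closes over. A minimal sketch of that pattern, where `dispatch` is an assumed stand-in for idaapi.execute_sync:

```python
import functools

def dispatch(callback):
    """Assumed stand-in for idaapi.execute_sync: runs callback, discards its return."""
    callback()

def synchronized(function):
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        output = [None]               # mutable cell the thunk writes into
        def thunk():
            output[0] = function(*args, **kwargs)
            return 1                  # execute_sync expects an int status
        dispatch(thunk)
        return output[0]              # the smuggled return value
    return wrapper

@synchronized
def add(a, b):
    return a + b

print(add(2, 3))  # 5
```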
#------------------------------------------------------------------------------
# IDA Async Magic
#------------------------------------------------------------------------------
def await_future(future):
"""
This is effectively a technique I use to get around completely blocking
IDA's mainthread while waiting for a threaded result that may need to make
use of the execute_sync operators.
Waiting for a 'future' thread result to come through via this function
lets other execute_sync actions slip through (at least Read, Fast).
"""
interval = 0.02 # the interval which we wait for a response
# run until the future arrives
while True:
# block for a brief period to see if the future completes
try:
return future.get(timeout=interval)
#
# the future timed out, so perhaps it is blocked on a request
# to the mainthread. flush the requests now and try again
#
except Queue.Empty as e:
pass
logger.debug("Awaiting future...")
#
# if we are executing (well, blocking) as the main thread, we need
# to flush the event loop so IDA does not hang
#
if idaapi.is_main_thread():
flush_ida_sync_requests()
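The await_future() idea can be demonstrated with a plain worker thread and a queue: poll with a short timeout so the waiting thread can do other work between attempts (in IDA, flushing the UI event loop). This sketch uses Python 3's `queue` module rather than the Py2 `Queue` seen above, and omits the mainthread flush since there is no event loop to keep alive:

```python
import queue
import threading

def await_future(result_queue, interval=0.02):
    """Poll for a threaded result instead of blocking outright."""
    while True:
        try:
            return result_queue.get(timeout=interval)
        except queue.Empty:
            pass                  # timed out; do housekeeping, then retry

results = queue.Queue()
threading.Timer(0.1, results.put, args=(42,)).start()  # 'future' arrives late
print(await_future(results))  # 42
```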
def await_lock(lock):
"""
Attempt to acquire a lock without blocking the IDA mainthread.
See await_future() for more details.
"""
elapsed = 0 # total time elapsed waiting for the lock
interval = 0.02 # the interval (in seconds) between acquire attempts
timeout = 60.0 # the total time allotted to acquiring the lock
end_time = time.time() + timeout
# wait until the lock is available
while time.time() < end_time:
#
# attempt to acquire the given lock without blocking (via 'False').
# if we successfully acquire the lock, then we can return (success)
#
if lock.acquire(False):
logger.debug("Acquired lock!")
return
#
# the lock is not available yet. we need to sleep so we don't choke
# the cpu, and try to acquire the lock again next time through...
#
logger.debug("Awaiting lock...")
time.sleep(interval)
#
# if we are executing (well, blocking) as the main thread, we need
# to flush the event loop so IDA does not hang
#
if idaapi.is_main_thread():
flush_ida_sync_requests()
#
# we spent 60 seconds trying to acquire the lock, but never got it...
# to avoid hanging IDA indefinitely (or worse), we abort via exception
#
raise RuntimeError("Failed to acquire lock after %f seconds!" % timeout)
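The await_lock() polling pattern is self-contained enough to sketch with a standard threading.Lock: attempt a non-blocking acquire on an interval with an overall timeout, instead of parking the thread in lock.acquire(). The IDA event-loop flush is omitted here since there is no mainthread to keep alive:

```python
import time
import threading

def await_lock(lock, interval=0.02, timeout=2.0):
    """Acquire 'lock' by polling, aborting after 'timeout' seconds."""
    end_time = time.time() + timeout
    while time.time() < end_time:
        if lock.acquire(False):   # non-blocking acquire attempt
            return
        time.sleep(interval)      # yield the cpu between attempts
    raise RuntimeError("Failed to acquire lock after %f seconds!" % timeout)

lock = threading.Lock()
lock.acquire()                              # simulate a busy lock
threading.Timer(0.1, lock.release).start()  # held briefly by 'someone else'
await_lock(lock)                            # returns once it is released
print("acquired")
```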
@mainthread
def flush_ida_sync_requests():
"""
Flush all execute_sync requests.
"""
# this will trigger/flush the IDA UI loop
qta = QtCore.QCoreApplication.instance()
qta.processEvents()
#------------------------------------------------------------------------------
# IDA Util
#------------------------------------------------------------------------------
# taken from https://github.com/gaasedelen/prefix
PREFIX_DEFAULT = "MyPrefix"
PREFIX_SEPARATOR = '%'
def prefix_function(function_address, prefix):
"""
Prefix a function name with the given string.
"""
original_name = get_function_name(function_address)
new_name = str(prefix) + PREFIX_SEPARATOR + str(original_name)
# rename the function with the newly prefixed name
idaapi.set_name(function_address, new_name, idaapi.SN_NOWARN)
def prefix_functions(function_addresses, prefix):
"""
Prefix a list of functions with the given string.
"""
for function_address in function_addresses:
prefix_function(function_address, prefix)
def clear_prefix(function_address):
"""
Clear the prefix from a given function.
"""
original_name = get_function_name(function_address)
#
# locate the last (rfind) prefix separator in the function name as
# we will want to keep everything that comes after it
#
i = original_name.rfind(PREFIX_SEPARATOR)
# if there is no prefix (separator), there is nothing to trim
if i == -1:
return
# trim the prefix off the original function name and discard it
new_name = original_name[i+1:]
# rename the function with the prefix stripped
idaapi.set_name(function_address, new_name, idaapi.SN_NOWARN)
def clear_prefixes(function_addresses):
"""
Clear the prefix from a list of given functions.
"""
for function_address in function_addresses:
clear_prefix(function_address)
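The prefix/strip round-trip reduces to pure string operations, mirroring prefix_function() and clear_prefix() without the idaapi rename calls. A minimal sketch:

```python
PREFIX_SEPARATOR = '%'

def apply_prefix(name, prefix):
    """Prepend 'prefix' to a function name, separator-delimited."""
    return prefix + PREFIX_SEPARATOR + name

def strip_prefix(name):
    """Keep everything after the *last* separator, as clear_prefix() does."""
    i = name.rfind(PREFIX_SEPARATOR)
    return name if i == -1 else name[i+1:]

name = apply_prefix("sub_401000", "MyPrefix")
print(name)                    # MyPrefix%sub_401000
print(strip_prefix(name))      # sub_401000
print(strip_prefix("plain"))   # no separator: returned unchanged
```

Note that rfind means stacked prefixes are all discarded in one strip, since only the text after the last separator survives.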
def get_function_name(function_address):
"""
Get a function's true name.
"""
# get the original function name from the database
if using_ida7api:
original_name = idaapi.get_name(function_address)
else:
original_name = idaapi.get_true_name(idaapi.BADADDR, function_address)
# sanity check
if original_name is None:
raise ValueError("Invalid function address")
# return the function name
return original_name
#------------------------------------------------------------------------------
# Interactive
#------------------------------------------------------------------------------
@mainthread
def prompt_string(label, title, default=""):
"""
Prompt the user with a dialog to enter a string.
This does not block the IDA main thread (unlike idaapi.askstr)
"""
dlg = QtWidgets.QInputDialog(None)
dlg.setWindowFlags(dlg.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
dlg.setInputMode(QtWidgets.QInputDialog.TextInput)
dlg.setLabelText(label)
dlg.setWindowTitle(title)
dlg.setTextValue(default)
dlg.resize(
dlg.fontMetrics().averageCharWidth()*80,
dlg.fontMetrics().averageCharWidth()*10
)
ok = dlg.exec_()
text = str(dlg.textValue())
return (ok, text)
@mainthread
def gui_rename_function(function_address):
"""
Interactive rename of a function in the IDB.
"""
original_name = get_function_name(function_address)
# prompt the user for a new function name
ok, new_name = prompt_string(
"Please enter function name",
"Rename Function",
original_name
)
#
# if the user clicked cancel, or the name they entered
# is identical to the original, there's nothing to do
#
if not (ok and new_name != original_name):
return
# rename the function
idaapi.set_name(function_address, new_name, idaapi.SN_NOCHECK)
@mainthread
def gui_prefix_functions(function_addresses):
"""
Interactive prefixing of functions in the IDB.
"""
# prompt the user for a new function name
ok, prefix = prompt_string(
"Please enter a function prefix",
"Prefix Function(s)",
PREFIX_DEFAULT
)
# bail if the user clicked cancel or failed to enter a prefix
if not (ok and prefix):
return
# prefix the given functions with the user specified prefix
prefix_functions(function_addresses, prefix)
@@ -1,117 +0,0 @@
import idaapi
#------------------------------------------------------------------------------
# Compatibility File
#------------------------------------------------------------------------------
#
# This file is used to reduce the number of compatibility checks made
# throughout Lighthouse for varying versions of IDA.
#
# As of July 2017, Lighthouse fully supports IDA 6.8 - 7.0. I expect that
# much of this compatibility layer and IDA 6.x support will be dropped for
# maintainability reasons sometime in 2018 as the userbase migrates up to
# IDA 7.0 and beyond.
#
# get the IDA version number
major, minor = map(int, idaapi.get_kernel_version().split("."))
#------------------------------------------------------------------------------
# IDA 7 API - COMPAT
#------------------------------------------------------------------------------
#
# In IDA 7.0, Hex-Rays refactored the IDA API quite a bit. This impacts
# Lighthouse in a few places, so we have had to apply a compatibility
# fixup to a few places throughout the code.
#
# We use the 'using_ida7api' global throughout the code to determine if
# the IDA 7 API is available, and should be used.
#
using_ida7api = (major > 6)
#------------------------------------------------------------------------------
# PySide --> PyQt5 - COMPAT
#------------------------------------------------------------------------------
#
# As of IDA 6.9, Hex-Rays has started using PyQt5 versus PySide on Qt4.
#
# There are a few differences between these two bindings. This file tries
# to help us cut back on the compatibility checks/churn in every other
# file that consumes them.
#
using_pyqt5 = using_ida7api or (major == 6 and minor >= 9)
#
# From Qt4 --> Qt5, the organization of some of the code / objects has
# changed. We use this file to shim/re-alias a few of these to reduce the
# number of compatibility checks / code churn in the code that consumes them.
#
if using_pyqt5:
import PyQt5.QtGui as QtGui
import PyQt5.QtCore as QtCore
import PyQt5.QtWidgets as QtWidgets
else:
import PySide.QtGui as QtGui
import PySide.QtCore as QtCore
QtWidgets = QtGui
QtCore.pyqtSignal = QtCore.Signal
QtCore.pyqtSlot = QtCore.Slot
#------------------------------------------------------------------------------
# Dockable Widget Shim
#------------------------------------------------------------------------------
class DockableShim(object):
"""
A compatibility layer for dockable widgets (IDA 6.8 --> IDA 7.0)
IDA 7.0 got rid of 'TForms' and instead only uses TWidgets (QWidgets),
this class acts as a basic compatibility shim for IDA 6.8 --> IDA 7.0.
"""
def __init__(self, title, icon_path):
self._title = title
self._icon = QtGui.QIcon(icon_path)
# IDA 7+ Widgets
if using_ida7api:
import sip
self._form = idaapi.create_empty_widget(self._title)
self._widget = sip.wrapinstance(long(self._form), QtWidgets.QWidget) # NOTE: LOL
# legacy IDA PluginForm's
else:
self._form = idaapi.create_tform(self._title, None)
if using_pyqt5:
self._widget = idaapi.PluginForm.FormToPyQtWidget(self._form)
else:
self._widget = idaapi.PluginForm.FormToPySideWidget(self._form)
self._widget.setWindowIcon(self._icon)
def show(self):
"""
Show the dockable widget.
"""
# IDA 7+ Widgets
if using_ida7api:
flags = idaapi.PluginForm.WOPN_TAB | \
idaapi.PluginForm.WOPN_MENU | \
idaapi.PluginForm.WOPN_RESTORE | \
idaapi.PluginForm.WOPN_PERSIST
idaapi.display_widget(self._form, flags)
# legacy IDA PluginForm's
else:
flags = idaapi.PluginForm.FORM_TAB | \
idaapi.PluginForm.FORM_MENU | \
idaapi.PluginForm.FORM_RESTORE | \
idaapi.PluginForm.FORM_PERSIST | \
0x80 #idaapi.PluginForm.FORM_QWIDGET
idaapi.open_tform(self._form, flags)
@@ -1,719 +0,0 @@
import os
import idaapi
import idautils
from lighthouse.ui import *
from lighthouse.util import *
from lighthouse.parsers import *
from lighthouse.palette import LighthousePalette
from lighthouse.painting import CoveragePainter
from lighthouse.director import CoverageDirector
from lighthouse.coverage import DatabaseCoverage
from lighthouse.metadata import DatabaseMetadata, metadata_progress
# start the global logger *once*
if not logging_started():
logger = start_logging()
#------------------------------------------------------------------------------
# IDA Plugin
#------------------------------------------------------------------------------
PLUGIN_VERSION = "0.7.0"
AUTHORS = "Markus Gaasedelen"
DATE = "2017"
def PLUGIN_ENTRY():
"""
Required plugin entry point for IDAPython Plugins.
"""
return Lighthouse()
class Lighthouse(idaapi.plugin_t):
"""
The Lighthouse IDA Plugin.
"""
flags = idaapi.PLUGIN_PROC | idaapi.PLUGIN_MOD | idaapi.PLUGIN_HIDE
comment = "Code Coverage Explorer"
help = ""
wanted_name = "Lighthouse"
wanted_hotkey = ""
#--------------------------------------------------------------------------
# IDA Plugin Overloads
#--------------------------------------------------------------------------
def init(self):
"""
This is called by IDA when it is loading the plugin.
"""
# attempt plugin initialization
try:
self._install_plugin()
# failed to initialize or integrate the plugin, log and skip loading
except Exception as e:
logger.exception("Failed to initialize")
return idaapi.PLUGIN_SKIP
# plugin loaded successfully, print the Lighthouse banner
self.print_banner()
logger.info("Successfully initialized")
# tell IDA to keep the plugin loaded (everything is okay)
return idaapi.PLUGIN_KEEP
def run(self, arg):
"""
This is called by IDA when this file is loaded as a script.
"""
idaapi.warning("The Lighthouse plugin cannot be run as a script.")
def term(self):
"""
This is called by IDA when it is unloading the plugin.
"""
# attempt to cleanup and uninstall our plugin instance
try:
self._uninstall_plugin()
# failed to cleanly remove the plugin, log failure
except Exception as e:
logger.exception("Failed to cleanly unload the plugin")
logger.info("-"*75)
logger.info("Plugin terminated")
#--------------------------------------------------------------------------
# Initialization
#--------------------------------------------------------------------------
def _install_plugin(self):
"""
Initialize & integrate the plugin into IDA.
"""
self._init()
self._install_ui()
def _init(self):
"""
Initialize plugin members.
"""
# plugin color palette
self.palette = LighthousePalette()
# the coverage engine
self.director = CoverageDirector(self.palette)
# the coverage painter
self.painter = CoveragePainter(self.director, self.palette)
# the coverage overview widget
self._ui_coverage_overview = None
# menu entry icons
self._icon_id_file = idaapi.BADADDR
self._icon_id_batch = idaapi.BADADDR
self._icon_id_overview = idaapi.BADADDR
# the directory to start the coverage file dialog in
self._last_directory = idautils.GetIdbDir()
def _install_ui(self):
"""
Initialize & integrate all UI elements.
"""
self._install_load_file()
self._install_load_batch()
self._install_open_coverage_overview()
def print_banner(self):
"""
Print the Lighthouse plugin banner.
"""
# build the main banner title
banner_params = (PLUGIN_VERSION, AUTHORS, DATE)
banner_title = "Lighthouse v%s - (c) %s - %s" % banner_params
# print plugin banner
lmsg("")
lmsg("-"*75)
lmsg("---[ %s" % banner_title)
lmsg("-"*75)
lmsg("")
#--------------------------------------------------------------------------
# Termination
#--------------------------------------------------------------------------
def _uninstall_plugin(self):
"""
Cleanup & uninstall the plugin from IDA.
"""
self._uninstall_ui()
self._cleanup()
def _uninstall_ui(self):
"""
Cleanup & uninstall the plugin UI from IDA.
"""
self._uninstall_open_coverage_overview()
self._uninstall_load_batch()
self._uninstall_load_file()
def _cleanup(self):
"""
IDB closing event, last chance to spin down threaded workers.
"""
self.painter.terminate()
self.director.terminate()
#--------------------------------------------------------------------------
# IDA Actions
#--------------------------------------------------------------------------
ACTION_LOAD_FILE = "lighthouse:load_file"
ACTION_LOAD_BATCH = "lighthouse:load_batch"
ACTION_COVERAGE_OVERVIEW = "lighthouse:coverage_overview"
def _install_load_file(self):
"""
Install the 'File->Load->Code coverage file...' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "load.png"))
icon_data = str(open(icon_path, "rb").read())
self._icon_id_file = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_LOAD_FILE, # The action name.
"~C~ode coverage file...", # The action text.
IDACtxEntry(self.interactive_load_file), # The action handler.
None, # Optional: action shortcut
"Load individual code coverage file(s)", # Optional: tooltip
self._icon_id_file # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load_file action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self.ACTION_LOAD_FILE, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load_file action to menu")
logger.info("Installed the 'Code coverage file' menu entry")
def _install_load_batch(self):
"""
Install the 'File->Load->Code coverage batch...' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "batch.png"))
icon_data = str(open(icon_path, "rb").read())
self._icon_id_batch = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_LOAD_BATCH, # The action name.
"~C~ode coverage batch...", # The action text.
IDACtxEntry(self.interactive_load_batch), # The action handler.
None, # Optional: action shortcut
"Load and aggregate code coverage files", # Optional: tooltip
self._icon_id_batch # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load_batch action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self.ACTION_LOAD_BATCH, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load_batch action to menu")
logger.info("Installed the 'Code coverage batch' menu entry")
def _install_open_coverage_overview(self):
"""
Install the 'View->Open subviews->Coverage Overview' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "overview.png"))
icon_data = str(open(icon_path, "rb").read())
self._icon_id_overview = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_COVERAGE_OVERVIEW, # The action name.
"~C~overage Overview", # The action text.
IDACtxEntry(self.open_coverage_overview), # The action handler.
None, # Optional: action shortcut
"Open database code coverage overview", # Optional: tooltip
self._icon_id_overview # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register open coverage overview action with IDA")
# attach the action to the View-> dropdown menu
result = idaapi.attach_action_to_menu(
"View/Open subviews/Hex dump", # Relative path of where to add the action
self.ACTION_COVERAGE_OVERVIEW, # The action ID (see above)
idaapi.SETMENU_INS # We want to insert the action before ^
)
if not result:
raise RuntimeError("Failed to attach action to 'View/Open subviews' dropdown")
logger.info("Installed the 'Coverage Overview' menu entry")
def _uninstall_load_file(self):
"""
Remove the 'File->Load file->Code coverage file...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/",
self.ACTION_LOAD_FILE
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_LOAD_FILE)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_file)
self._icon_id_file = idaapi.BADADDR
logger.info("Uninstalled the 'Code coverage file' menu entry")
def _uninstall_load_batch(self):
"""
Remove the 'File->Load file->Code coverage batch...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/",
self.ACTION_LOAD_BATCH
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_LOAD_BATCH)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_batch)
self._icon_id_batch = idaapi.BADADDR
logger.info("Uninstalled the 'Code coverage batch' menu entry")
def _uninstall_open_coverage_overview(self):
"""
Remove the 'View->Open subviews->Coverage Overview' menu entry.
"""
# remove the entry from the View-> menu
result = idaapi.detach_action_from_menu(
"View/Open subviews/Hex dump",
self.ACTION_COVERAGE_OVERVIEW
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_COVERAGE_OVERVIEW)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_overview)
self._icon_id_overview = idaapi.BADADDR
logger.info("Uninstalled the 'Coverage Overview' menu entry")
#--------------------------------------------------------------------------
# UI - Actions
#--------------------------------------------------------------------------
def open_coverage_overview(self):
"""
Open the 'Coverage Overview' dialog.
"""
self.palette.refresh_colors()
# the coverage overview is already open & visible, simply refresh it
if self._ui_coverage_overview and self._ui_coverage_overview.isVisible():
self._ui_coverage_overview.refresh()
return
# create a new coverage overview if there is not one visible
self._ui_coverage_overview = CoverageOverview(self.director)
self._ui_coverage_overview.show()
def interactive_load_batch(self):
"""
Interactive loading & aggregation of coverage files.
"""
self.palette.refresh_colors()
#
# kick off an asynchronous metadata refresh. this collects underlying
# database metadata while the user will be busy selecting coverage files.
#
future = self.director.refresh_metadata(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
# can select the coverage files they would like to load from disk.
#
loaded_files = self._select_and_load_coverage_files()
# if no valid coverage files were selected (and loaded), bail
if not loaded_files:
self.director.metadata.abort_refresh()
return
# prompt the user to name the new coverage aggregate
default_name = "BATCH_%s" % self.director.peek_shorthand()
ok, coverage_name = prompt_string(
"Batch Name:",
"Please enter a name for this coverage",
default_name
)
# if user didn't enter a name for the batch, or hit cancel, we abort
if not (ok and coverage_name):
lmsg("Aborting batch load...")
return
#
# to continue any further, we need the database metadata. hopefully
# it has finished with its asynchronous collection, otherwise we will
# block until it completes. the user will be shown a progress dialog.
#
idaapi.show_wait_box("Building database metadata...")
await_future(future)
# aggregate all the selected files into one new coverage set
new_coverage = self._aggregate_batch(loaded_files)
# inject the aggregated coverage set
idaapi.replace_wait_box("Mapping coverage...")
self.director.create_coverage(coverage_name, new_coverage.data)
# select the newly created batch coverage
idaapi.replace_wait_box("Selecting coverage...")
self.director.select_coverage(coverage_name)
# all done, hide the IDA wait box
idaapi.hide_wait_box()
lmsg("Successfully loaded batch %s..." % coverage_name)
# show the coverage overview
self.open_coverage_overview()
def _aggregate_batch(self, loaded_files):
"""
Aggregate the given loaded_files data into a single coverage object.
"""
idaapi.replace_wait_box("Aggregating coverage batch...")
# create a new coverage set to manually aggregate data into
coverage = DatabaseCoverage({}, self.palette)
#
# loop through the coverage data we have loaded from disk, and begin
# the normalization process to translate / filter / flatten it for
# insertion into the director (as a list of instruction addresses)
#
for i, data in enumerate(loaded_files, 1):
# keep the user informed about our progress while loading coverage
idaapi.replace_wait_box(
"Aggregating batch data %u/%u" % (i, len(loaded_files))
)
# normalize coverage data to the open database
try:
addresses = self._normalize_coverage(data, self.director.metadata)
# normalization failed, print & log it
except Exception as e:
lmsg("Failed to map coverage %s" % data.filepath)
lmsg("- %s" % e)
logger.exception("Error details:")
continue
# aggregate the addresses into the output coverage object
coverage.add_addresses(addresses, False)
# return the created coverage object
return coverage
def interactive_load_file(self):
"""
Interactive loading of individual coverage files.
"""
self.palette.refresh_colors()
created_coverage = []
#
# kick off an asynchronous metadata refresh. this collects underlying
# database metadata while the user will be busy selecting coverage files.
#
future = self.director.refresh_metadata(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
# can select the coverage files they would like to load from disk.
#
loaded_files = self._select_and_load_coverage_files()
# if no valid coverage files were selected (and loaded), bail
if not loaded_files:
self.director.metadata.abort_refresh()
return
#
# to continue any further, we need the database metadata. hopefully
# it has finished with its asynchronous collection, otherwise we will
# block until it completes. the user will be shown a progress dialog.
#
idaapi.show_wait_box("Building database metadata...")
await_future(future)
#
# stop the director's aggregate from updating. this is in the interest
# of better performance when loading more than one new coverage set
# into the director.
#
self.director.suspend_aggregation()
#
# loop through the coverage data we have loaded from disk, and begin
# the normalization process to translate / filter / flatten its blocks
# into a generic format the director can understand (a list of addresses)
#
for i, data in enumerate(loaded_files, 1):
# keep the user informed about our progress while loading coverage
idaapi.replace_wait_box(
"Normalizing and mapping coverage %u/%u" % (i, len(loaded_files))
)
# normalize coverage data to the open database
try:
addresses = self._normalize_coverage(data, self.director.metadata)
except Exception as e:
lmsg("Failed to map coverage %s" % data.filepath)
lmsg("- %s" % e)
logger.exception("Error details:")
continue
#
# ask the director to create and track a new coverage set from
# the normalized coverage data we provide
#
coverage_name = os.path.basename(data.filepath)
self.director.create_coverage(coverage_name, addresses)
# save the coverage name to the list of successful loads
created_coverage.append(coverage_name)
#
# resume the director's aggregation capabilities, triggering an update
# to recompute the aggregate with the newly loaded coverage
#
idaapi.replace_wait_box("Recomputing coverage aggregate...")
self.director.resume_aggregation()
# if nothing was mapped, then there's nothing else to do
if not created_coverage:
lmsg("No coverage files could be mapped...")
idaapi.hide_wait_box()
return
#
# select one (the first) of the newly loaded coverage file(s)
#
idaapi.replace_wait_box("Selecting coverage...")
self.director.select_coverage(created_coverage[0])
# all done, hide the IDA wait box
idaapi.hide_wait_box()
lmsg("Successfully loaded %u coverage file(s)..." % len(created_coverage))
# show the coverage overview
self.open_coverage_overview()
def _select_and_load_coverage_files(self):
"""
Interactive coverage file selection.
"""
#
# prompt the user with a QtFileDialog so that they can select any
# number of coverage files to load at once.
#
# if no files are selected, we abort the coverage loading process.
#
filenames = self._select_coverage_files()
if not filenames:
return None
# load the selected coverage files from disk and return them
return self._load_coverage_files(filenames)
def _select_coverage_files(self):
"""
Open the 'Load Code Coverage' dialog and capture file selections.
"""
# create & configure a Qt File Dialog for immediate use
file_dialog = QtWidgets.QFileDialog(
None,
'Open code coverage file',
self._last_directory,
'All Files (*.*)'
)
file_dialog.setFileMode(QtWidgets.QFileDialog.ExistingFiles)
# prompt the user with the file dialog, and await filename(s)
filenames, _ = file_dialog.getOpenFileNames()
#
# remember the last directory we were in (parsed from a selected file)
# for the next time the user comes to load coverage files
#
if filenames:
self._last_directory = os.path.dirname(filenames[0]) + os.sep
# log the captured (selected) filenames from the dialog
logger.debug("Captured filenames from file dialog:")
logger.debug('\n - ' + '\n - '.join(filenames))
# return the captured filenames
return filenames
#--------------------------------------------------------------------------
# Misc
#--------------------------------------------------------------------------
#
# NOTE / FUTURE / TODO
#
# In my vision for Lighthouse, I always imagined that it would be
# able to dynamically detect and load coverage data from a variety of
# different coverage sources and formats (DR, PIN, an inst trace, etc)
#
# The dream was that Lighthouse would have a folder of loaders to parse
# and normalize their data to the database / loaded executable so that
# they can be injected into the director for exploration.
#
# I would still like to do this, but really haven't heard many people
# asking for additional coverage source support yet... so this feature
# keeps getting pushed back.
#
# ...
#
# In the meantime, we have a few random functions that are hardcoded
# here to load DrCov files and normalize them to the current database.
#
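The pluggable "folder of loaders" idea from the note above could be sketched roughly as follows. Everything here is hypothetical: `DrcovLoader`, `load_coverage`, and the `LOADERS` registry do not exist in Lighthouse today, which hardcodes `DrcovData` only.

```python
# Hypothetical sketch of a pluggable coverage-loader registry, per the
# note above. No class or function here is part of Lighthouse itself.

class CoverageLoaderError(Exception):
    """Raised when no registered loader claims a coverage file."""
    pass

class DrcovLoader(object):
    """Example loader stub for DrCov logs (real parsing omitted)."""
    EXTENSIONS = (".log", ".drcov")

    @classmethod
    def claims(cls, filename):
        # cheap filename-based detection; a real loader might sniff magic bytes
        return filename.lower().endswith(cls.EXTENSIONS)

    def parse(self, filename):
        # a real implementation would parse the file and return block data
        return {"format": "drcov", "filepath": filename}

# the 'folder of loaders' would populate this registry dynamically
LOADERS = [DrcovLoader]

def load_coverage(filename):
    """Dispatch a coverage file to the first loader that claims it."""
    for loader in LOADERS:
        if loader.claims(filename):
            return loader().parse(filename)
    raise CoverageLoaderError("No loader for %s" % filename)

print(load_coverage("trace.drcov")["format"])  # drcov
```

New formats would then only need a loader class dropped into the registry, rather than edits to the hardcoded `_load_coverage_file` below.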
def _load_coverage_files(self, filenames):
"""
Load multiple code coverage files from disk.
"""
loaded_coverage = []
#
# loop through each of the given filenames and attempt to load/parse
# their coverage data from disk
#
for filename in filenames:
# attempt to load/parse a single coverage data file from disk
try:
coverage_data = self._load_coverage_file(filename)
# catch all for parse errors / bad input / malformed files
except Exception as e:
lmsg("Failed to load coverage %s" % filename)
logger.exception("Error details:")
continue
# save the loaded coverage data to the output list
loaded_coverage.append(coverage_data)
# return all the successfully loaded coverage files
return loaded_coverage
def _load_coverage_file(self, filename):
"""
Load a single code coverage file from disk.
"""
return DrcovData(filename)
def _normalize_coverage(self, coverage_data, metadata):
"""
Normalize loaded DrCov data to the database metadata.
"""
# extract the coverage relevant to this IDB (well, the root binary)
root_filename = idaapi.get_root_filename()
coverage_blocks = coverage_data.get_blocks_by_module(root_filename)
# rebase the basic blocks
base = idaapi.get_imagebase()
rebased_blocks = rebase_blocks(base, coverage_blocks)
# coalesce the blocks into larger contiguous blobs
condensed_blocks = coalesce_blocks(rebased_blocks)
# flatten the blobs into individual instructions or addresses
return metadata.flatten_blocks(condensed_blocks)
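The normalization pipeline above (extract the module's blocks, rebase them, coalesce them, flatten them) can be illustrated in isolation. The `rebase_blocks` and `coalesce_blocks` below are simplified stand-ins for the real helpers, assuming the drcov block format of `(offset, size)` tuples:

```python
# Simplified sketches of the rebase/coalesce steps used by
# _normalize_coverage above; stand-ins, not the real helpers.

def rebase_blocks(base, blocks):
    """Rebase (offset, size) basic blocks onto the database imagebase."""
    return [(base + offset, size) for offset, size in blocks]

def coalesce_blocks(blocks):
    """Merge contiguous or overlapping blocks into larger blobs."""
    blocks = sorted(blocks)
    if not blocks:
        return []
    merged = [blocks[0]]
    for address, size in blocks[1:]:
        last_address, last_size = merged[-1]
        if address <= last_address + last_size:
            # contiguous/overlapping: extend the previous blob
            end = max(last_address + last_size, address + size)
            merged[-1] = (last_address, end - last_address)
        else:
            merged.append((address, size))
    return merged

# two contiguous blocks and one disjoint block, rebased to 0x400000
blocks = [(0x100, 0x10), (0x110, 0x20), (0x200, 0x8)]
rebased = rebase_blocks(0x400000, blocks)
condensed = coalesce_blocks(rebased)  # [(0x400100, 0x30), (0x400200, 0x8)]
```

Coalescing before flattening matters for performance: walking one contiguous blob through the metadata is much cheaper than walking every raw basic block individually.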
@@ -0,0 +1 @@
get_context = lambda x: None
@@ -34,6 +34,7 @@ class TextToken(object):
# NOTE: this is now dynamically computed in parse(...)
#COVERAGE_TOKEN = r'(?P<COVERAGE_TOKEN>[A-Za-z])'
AGGREGATE_TOKEN = '*'
#
# LOGIC_TOKEN:
@@ -117,21 +118,6 @@ class TokenLogicOperator(AstToken):
return operator.sub
raise ValueError("Unknown Operator")
class TokenCoverageRange(AstToken):
"""
AST Token for a coverage range reference.
eg: 'A,Z'
"""
def __init__(self, start, comma, end):
super(TokenCoverageRange, self).__init__()
self.text_tokens = [start, comma, end]
# referenced coverage sets
self.symbol_start = start.value.upper()
self.symbol_end = end.value.upper()
class TokenCoverageSingle(AstToken):
"""
AST Token for a single coverage reference.
@@ -182,28 +168,25 @@ def _ast_equal_recursive(first, second):
if type(first) != type(second):
return False
#
# if both tokens are terminating / None, they are a match
#
if first == second == None:
return True
#
# if the current node is a logic operator, we need to evaluate the
# expressions that make up its input.
#
if isinstance(first, TokenLogicOperator):
elif isinstance(first, TokenLogicOperator):
if not _ast_equal_recursive(first.op1, second.op1):
return False
if not _ast_equal_recursive(first.op2, second.op2):
return False
return first.operator == second.operator
#
# if the current node is a coverage range, we need to evaluate the
# range expression. this will produce an aggregate coverage set
# described by the start/end of the range (Eg, 'A,D')
#
elif isinstance(first, TokenCoverageRange):
return first.symbol_start == second.symbol_start and \
first.symbol_end == second.symbol_end
#
# if the current node is a coverage token, we need simply need
# to compare its symbol.
@@ -216,7 +199,7 @@ def _ast_equal_recursive(first, second):
# unknown token? (this should never happen)
#
raise False
raise ValueError("Unknown token types, cannot compare them...")
#------------------------------------------------------------------------------
# Parsing
@@ -273,17 +256,11 @@ class CompositionParser(object):
EXPRESSION:
'(' EXPRESSION ')' COMPOSITION_TAIL | COVERAGE COMPOSITION_TAIL
COVERAGE:
COVERAGE_TOKEN COVERAGE_RANGE
COVERAGE_RANGE:
',' COVERAGE_TOKEN | None
COVERAGE_TOKEN:
'A' | 'B' | 'C' | ... | 'Z'
'A' | 'B' | 'C' | ... | 'Z' | AGGREGATE_TOKEN
LOGIC_TOKEN:
'&' | '|' | '^' | '-'
'&' | '|' | '^' | '-' | None
"""
@@ -310,10 +287,10 @@ class CompositionParser(object):
# reflect the state of loaded coverage
#
COVERAGE_TOKEN = r'(?P<COVERAGE_TOKEN>[%s])' % ''.join(coverage_tokens)
COVERAGE_TOKEN = r'(?P<COVERAGE_TOKEN>[%s])' % ''.join(coverage_tokens + [AGGREGATE_TOKEN])
#
# if there were any coverage tokens defined, then we definitily need
# if there were any coverage tokens defined, then we definitely need
# the constructed COVERAGE_TOKEN regex in our grammar list.
#
@@ -420,7 +397,7 @@ class CompositionParser(object):
def _EXPRESSION(self):
"""
EXPRESSION:
'(' EXPRESSION ')' COMPOSITION_TAIL | COVERAGE COMPOSITION_TAIL
'(' EXPRESSION ')' COMPOSITION_TAIL | COVERAGE_TOKEN COMPOSITION_TAIL
"""
#
@@ -449,49 +426,24 @@ class CompositionParser(object):
#
else:
expression = self._COVERAGE()
expression = self._COVERAGE_TOKEN()
# ... [COMPOSITION_TAIL]
return self._COMPOSITION_TAIL(expression)
def _COVERAGE(self):
"""
COVERAGE:
COVERAGE_TOKEN COVERAGE_RANGE
"""
coverage_start = self._COVERAGE_TOKEN()
coverage_range = self._COVERAGE_RANGE()
# if a there was a trailing ',A-Za-z' parsed, it's a coverage range
if coverage_range:
comma, coverage_end = coverage_range
return TokenCoverageRange(coverage_start, comma, coverage_end)
# return a single coverage set
return TokenCoverageSingle(coverage_start)
def _COVERAGE_RANGE(self):
"""
COVERAGE_RANGE:
',' COVERAGE_TOKEN | None
"""
if self._accept("COMMA"):
return (self.current_token, self._COVERAGE_TOKEN())
return None
def _COVERAGE_TOKEN(self):
"""
COVERAGE_TOKEN:
'A' | 'B' | 'C' | ... | 'Z'
'A' | 'B' | 'C' | ... | 'Z' | AGGREGATE_TOKEN
"""
if self._accept("COVERAGE_TOKEN"):
return self.current_token
return TokenCoverageSingle(self.current_token)
self._parse_error("Expected COVERAGE_TOKEN", TokenCoverageSingle)
def _LOGIC_TOKEN(self):
"""
LOGIC_TOKEN:
'&' | '|' | '^' | '-'
'&' | '|' | '^' | '-' | None
"""
if self._accept("OR") or \
self._accept("XOR") or \
@@ -1,5 +1,7 @@
from .parser import *
from lighthouse.util import *
from lighthouse.util.qt import *
from lighthouse.util.disassembler import disassembler
#------------------------------------------------------------------------------
# Composing Shell
@@ -19,15 +21,15 @@ class ComposingShell(QtWidgets.QWidget):
independent, but obviously must communicate with the director.
"""
def __init__(self, director, model, table=None):
def __init__(self, lctx, table_model, table_view=None):
super(ComposingShell, self).__init__()
self.setObjectName(self.__class__.__name__)
# external entities
self._director = director
self._palette = director._palette
self._model = model
self._table = table
self._director = lctx.director
self._palette = lctx.palette
self._table_model = table_model
self._table_view = table_view
# command / input
self._search_text = ""
@@ -44,6 +46,7 @@ class ComposingShell(QtWidgets.QWidget):
# configure the widget for use
self._ui_init()
self.refresh_theme()
#--------------------------------------------------------------------------
# Properties
@@ -67,6 +70,7 @@ class ComposingShell(QtWidgets.QWidget):
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize our ui elements
@@ -83,30 +87,18 @@ class ComposingShell(QtWidgets.QWidget):
# the composer label at the head of the shell
self._line_label = QtWidgets.QLabel("Composer")
self._line_label.setStyleSheet("QLabel { margin: 0 1ex 0 1ex }")
self._line_label.setAlignment(QtCore.Qt.AlignVCenter | QtCore.Qt.AlignHCenter)
self._line_label.setAlignment(QtCore.Qt.AlignCenter)
self._line_label.setFont(self._font)
self._line_label.setFixedWidth(self._line_label.sizeHint().width())
# the text box / shell / ComposingLine
self._line = ComposingLine()
# configure the shell background & default text color
palette = self._line.palette()
palette.setColor(QtGui.QPalette.Base, self._palette.overview_bg)
palette.setColor(QtGui.QPalette.Text, self._palette.composer_fg)
palette.setColor(QtGui.QPalette.WindowText, self._palette.composer_fg)
self._line.setPalette(palette)
def _ui_init_completer(self):
"""
Initialize the coverage hint UI elements.
"""
# NOTE/COMPAT:
if using_pyqt5:
self._completer_model = QtCore.QStringListModel([])
else:
self._completer_model = QtGui.QStringListModel([])
self._completer_model = QtCore.QStringListModel([])
self._completer = QtWidgets.QCompleter(self)
self._completer.setCompletionMode(QtWidgets.QCompleter.PopupCompletion)
@@ -135,16 +127,17 @@ class ComposingShell(QtWidgets.QWidget):
self._director.coverage_created(self._internal_refresh)
self._director.coverage_deleted(self._internal_refresh)
self._director.coverage_modified(self._internal_refresh)
self._director.coverage_switched(self._coverage_switched)
# register for cues from the model
self._model.layoutChanged.connect(self._ui_shell_text_changed)
self._table_model.layoutChanged.connect(self._ui_shell_text_changed)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
# create a qt layout for the 'compser' (the shell)
# create a qt layout for the 'composer' (the shell)
layout = QtWidgets.QHBoxLayout()
layout.setContentsMargins(0,0,0,0)
@@ -169,26 +162,81 @@ class ComposingShell(QtWidgets.QWidget):
"""
self._internal_refresh()
@idafast
@disassembler.execute_ui
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
assert (self._line and self._completer), "UI not yet initialized..."
# configure the shell background & default text color
qpal = self._line.palette()
qpal.setColor(QtGui.QPalette.Text, self._palette.shell_text)
qpal.setColor(QtGui.QPalette.WindowText, self._palette.shell_text)
self._line.setPalette(qpal)
# set other hard to access shell theme elements
self._line.setStyleSheet(
"QPlainTextEdit {"
" color: %s;" % self._palette.shell_text.name() + # this line ensures the text cursor changes color with the theme
" background-color: %s;" % self._palette.shell_background.name() +
" border: 1px solid %s;" % self._palette.shell_border.name() +
"} "
"QPlainTextEdit:hover, QPlainTextEdit:focus {"
" border: 1px solid %s;" % self._palette.shell_border_focus.name() +
"}"
)
# refresh completer popup style...
self._completer.popup().setStyleSheet(
"background: %s;" % self._palette.shell_hint_background.name() +
"color: %s;" % self._palette.shell_hint_text.name()
)
@disassembler.execute_ui
def _internal_refresh(self):
"""
Internal refresh of the shell.
"""
self._refresh_hint_list()
self._ui_shell_text_changed()
def _refresh_hint_list(self):
"""
Refresh the shell coverage hint contents.
"""
hints = []
self._shorthand = []
# get the most recent coverage strings from the director
detailed_strings = [self._director.get_coverage_string(x) for x in self._director.coverage_names]
self._completer_model.setStringList(detailed_strings)
self._shorthand = [x[0] for x in detailed_strings]
# get the detailed coverage strings from the director
for x in self._director.coverage_names:
hints.append(self._director.get_coverage_string(x))
symbol = self._director.get_shorthand(x)
if symbol:
self._shorthand.append(symbol)
# install the fresh coverage strings to the hint completer dialog
self._completer_model.setStringList(hints)
# queue a UI coverage hint if necessary
self._ui_hint_coverage_refresh()
def _coverage_switched(self):
"""
Handle a coverage switched event.
Specifically, we want to cover the special case where the hot shell is
being switched to. In these cases, we should forcefully clear the
'last' AST so that the full shell expression is re-evaluated and
sent forward to the director.
this will ensure that the director will evaluate and display the
results of the present expression as the 'Hot Shell' is now active.
"""
if self._director.coverage_name == "Hot Shell":
self._last_ast = None
self._internal_refresh()
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
@@ -243,13 +291,13 @@ class ComposingShell(QtWidgets.QWidget):
# not a search query, so clear any lingering filters for it
else:
self._model.filter_string("")
self._table_model.filter_string("")
#
# a Jump, eg '0x804010a' or 'sub_1400016F0'
#
if self.is_jump(text) and self._table:
if self.is_jump(text) and self._table_view:
self._line_label.setText("Jump")
self._highlight_jump()
return
@@ -276,7 +324,7 @@ class ComposingShell(QtWidgets.QWidget):
return
# jump to the function entry containing the requested address
if self.is_jump(text) and self._table:
if self.is_jump(text) and self._table_view:
self._execute_jump(text)
return
@@ -348,10 +396,10 @@ class ComposingShell(QtWidgets.QWidget):
"""
# the given text is a real search query, apply it as a filter now
self._model.filter_string(self._search_text)
self._table_model.filter_string(self._search_text)
# compute coverage % of the visible (filtered) results
percent = self._model.get_modeled_coverage_percent()
percent = self._table_model.get_modeled_coverage_percent()
# show the coverage % of the search results in the shell label
self._line_label.setText("%1.2f%%" % percent)
@@ -368,10 +416,10 @@ class ComposingShell(QtWidgets.QWidget):
self._color_clear()
# color search based on if there are any matching results
if self._model.rowCount():
self._color_text(self._palette.valid_text, start=1)
if self._table_model.rowCount():
self._color_text(self._palette.shell_text_valid, start=1)
else:
self._color_text(self._palette.invalid_text, start=1)
self._color_text(self._palette.shell_text_invalid, start=1)
################# UPDATES ENABLED #################
self._line.setUpdatesEnabled(True)
@@ -426,9 +474,9 @@ class ComposingShell(QtWidgets.QWidget):
except ValueError:
pass
else:
function_metadata = self._director.metadata.get_function(address)
if function_metadata:
return function_metadata.address
functions = self._director.metadata.get_functions_containing(address)
if functions:
return functions[0].address
#
# the user string did not translate to a parsable hex number (address)
@@ -440,16 +488,29 @@ class ComposingShell(QtWidgets.QWidget):
# special case to make 'sub_*' prefixed user inputs case insensitive
if text.lower().startswith("sub_"):
text = "sub_" + text[4:].upper()
# look up the text function name within the director's metadata
# attempt uppercase hex (IDA...)
function_metadata = self._director.metadata.get_function_by_name("sub_" + text[4:].upper())
if function_metadata:
return function_metadata.address
# attempt lowercase hex (Binja...)
function_metadata = self._director.metadata.get_function_by_name("sub_" + text[4:].lower())
if function_metadata:
return function_metadata.address
#
# no luck yet, let's just throw the user's raw text at the lookup. this
# would probably be a function they renamed, such as 'foobar'
#
function_metadata = self._director.metadata.get_function_by_name(text)
if function_metadata:
return function_metadata.address
#
# the user string did not translate to a function name that could
# be found in the director.
# be found in the director. so I guess they're not trying to jump...
#
# failure, the user input (text) isn't a jump ...
@@ -459,16 +520,16 @@ class ComposingShell(QtWidgets.QWidget):
"""
Execute the jump semantics.
"""
assert self._table
assert self._table_view
# retrieve the jump target
function_address = self._compute_jump(text)
assert function_address
# select the function entry in the coverage overview table
self._table.selectRow(self._model.func2row[function_address])
self._table.scrollTo(
self._table.currentIndex(),
self._table_view.selectRow(self._table_model.func2row[function_address])
self._table_view.scrollTo(
self._table_view.currentIndex(),
QtWidgets.QAbstractItemView.PositionAtCenter
)
@@ -484,7 +545,7 @@ class ComposingShell(QtWidgets.QWidget):
self._color_clear()
# color jump
self._color_text(self._palette.valid_text)
self._color_text(self._palette.shell_text_valid)
################# UPDATES ENABLED #################
self._line.setUpdatesEnabled(True)
@@ -575,7 +636,7 @@ class ComposingShell(QtWidgets.QWidget):
#
# While the user is picking a name for the new composite, we might as well
# try and cache it asynchronously :-). kick the caching off now.
# try and compute/cache it asynchronously :-). kick the caching off now.
#
self._director.cache_composition(self._last_ast, force=True)
@@ -593,13 +654,27 @@ class ComposingShell(QtWidgets.QWidget):
"COMP_%s" % self.text
)
# the user did not enter a coverage name or hit cancel - abort the save
#
# once the naming prompt closes, the composing shell tries to pop
# the coverage hint again, which can make it annoying and too
# aggressive.
#
# clearing focus on the text line will ensure the hint does not pop
#
self._line.clearFocus()
#
# returning back to the naming prompt, if the user did not enter a
# coverage name (or hit cancel), we will abort saving the composition
#
if not (ok and coverage_name):
return
#
# all good, ask the director to save the last composition
# composition under the given coverage name
# a name was given and all is good, ask the director to save the last
# composition under the user specified coverage name
#
self._director.add_composition(coverage_name, self._last_ast)
@@ -621,7 +696,20 @@ class ComposingShell(QtWidgets.QWidget):
# as it frequently gets in the way and is really annoying...
#
if not (self._line.hasFocus() or self.text):
if not (self._line.hasFocus() and self.text):
self._ui_hint_coverage_hide()
return
#
# if the text cursor is moving and the user has their left mouse
# button held, then they are probably doing a click + drag text
# selection, so we shouldn't be nagging them with hints and stuff
#
# without this condition, click+drag selection gets really choppy
#
if QtWidgets.QApplication.mouseButtons() & QtCore.Qt.LeftButton:
self._ui_hint_coverage_hide()
return
# scrape info from the current shell text state
@@ -781,7 +869,7 @@ class ComposingShell(QtWidgets.QWidget):
# configure the colors/style for this explicit token
#highlight.setBackground(QtGui.QBrush(QtGui.QColor(TOKEN_COLORS[token.type])))
highlight.setForeground(QtGui.QBrush(QtGui.QColor(TOKEN_COLORS[token.type])))
highlight.setForeground(QtGui.QBrush(TOKEN_COLORS[token.type]))
cursor.setCharFormat(highlight)
#
@@ -821,10 +909,10 @@ class ComposingShell(QtWidgets.QWidget):
cursor_position = cursor.position()
# setup the invalid text highlighter
invalid_color = self._palette.invalid_highlight
invalid_color = self._palette.shell_highlight_invalid
highlight = QtGui.QTextCharFormat()
highlight.setFontWeight(QtGui.QFont.Bold)
highlight.setBackground(QtGui.QBrush(QtGui.QColor(invalid_color)))
highlight.setBackground(QtGui.QBrush(invalid_color))
self._line.blockSignals(True)
################# UPDATES DISABLED #################
@@ -873,7 +961,7 @@ class ComposingShell(QtWidgets.QWidget):
# setup a simple font coloring (or clearing) text format
simple = QtGui.QTextCharFormat()
if color:
simple.setForeground(QtGui.QBrush(QtGui.QColor(color)))
simple.setForeground(QtGui.QBrush(color))
self._line.blockSignals(True)
################# UPDATES DISABLED #################
@@ -941,6 +1029,7 @@ class ComposingLine(QtWidgets.QPlainTextEdit):
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
self.setFont(self._font)
@@ -955,8 +1044,8 @@ class ComposingLine(QtWidgets.QPlainTextEdit):
# set the height of the textbox based on some arbitrary math :D
LINE_PADDING = self.document().documentMargin()*2
line_height = self._font_metrics.height() + LINE_PADDING - 2
self.setFixedHeight(line_height)
line_height = self._font_metrics.height() + LINE_PADDING + 2
self.setFixedHeight(int(line_height))
#--------------------------------------------------------------------------
# QPlainTextEdit Overloads
@@ -972,7 +1061,7 @@ class ComposingLine(QtWidgets.QPlainTextEdit):
e.key() == QtCore.Qt.Key_Enter:
#
# fire our convenience signal notifying listerns that the user
# fire our convenience signal notifying listeners that the user
# pressed enter. this signal firing indicates the user is
# probably trying to complete their query / input.
#
@@ -0,0 +1,111 @@
import os
import logging
from lighthouse.util.qt import *
from lighthouse.painting import CoveragePainter
from lighthouse.director import CoverageDirector
from lighthouse.coverage import DatabaseCoverage
from lighthouse.metadata import DatabaseMetadata
from lighthouse.util.disassembler import disassembler, DisassemblerContextAPI
logger = logging.getLogger("Lighthouse.Context")
#------------------------------------------------------------------------------
# Lighthouse Session Context
#------------------------------------------------------------------------------
class LighthouseContext(object):
"""
A database/binary-unique instance of Lighthouse and its subsystems.
"""
def __init__(self, core, dctx):
disassembler[self] = DisassemblerContextAPI(dctx)
self.core = core
self.dctx = dctx
self._started = False
# the database metadata cache
self.metadata = DatabaseMetadata(self)
# the coverage engine
self.director = CoverageDirector(self.metadata, self.core.palette)
# the coverage painter
self.painter = CoveragePainter(self, self.director, self.core.palette)
# the coverage overview widget
self.coverage_overview = None
# the directory to start the coverage file dialog in
self._last_directory = None
@property
def palette(self):
return self.core.palette
def start(self):
"""
One-time activation of a Lighthouse context and its subsystems.
"""
if self._started:
return
self.core.palette.warmup()
self.metadata.start()
self.director.start()
self.painter.start()
# TODO/BINJA remove this ASAP, or find a better workaround... I hate having this here
if disassembler.NAME == "BINJA":
disassembler.hide_dockable("Feature Map")
self._started = True
def terminate(self):
"""
Spin down any session subsystems before the session is deleted.
"""
if not self._started:
return
self.painter.terminate()
self.director.terminate()
self.metadata.terminate()
def select_coverage_files(self):
"""
Prompt a file selection dialog, returning file selections.
NOTE: This saves & reuses the last known directory for subsequent uses.
"""
if not self._last_directory:
self._last_directory = disassembler[self].get_database_directory()
# create & configure a Qt File Dialog for immediate use
file_dialog = QtWidgets.QFileDialog(
None,
'Open code coverage file',
self._last_directory,
'All Files (*.*)'
)
file_dialog.setFileMode(QtWidgets.QFileDialog.ExistingFiles)
# prompt the user with the file dialog, and await filename(s)
filenames, _ = file_dialog.getOpenFileNames()
#
# remember the last directory we were in (parsed from a selected file)
# for the next time the user comes to load coverage files
#
if filenames:
self._last_directory = os.path.dirname(filenames[0]) + os.sep
# log the captured (selected) filenames from the dialog
logger.debug("Captured filenames from file dialog:")
for name in filenames:
logger.debug(" - %s" % name)
# return the captured filenames
return filenames
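The `start()`/`terminate()` pair in `LighthouseContext` above spins subsystems up in dependency order (metadata, then director, then painter) and tears them down in reverse. A minimal sketch of that lifecycle pattern, with an illustrative `Subsystem` class that only logs its transitions:

```python
# Sketch of the start/terminate ordering used by LighthouseContext:
# subsystems come up in dependency order, go down in reverse.
# Subsystem and SessionContext here are illustrative only.

class Subsystem(object):
    def __init__(self, name, log):
        self.name, self._log = name, log
    def start(self):
        self._log.append("start:%s" % self.name)
    def terminate(self):
        self._log.append("stop:%s" % self.name)

class SessionContext(object):
    def __init__(self, log):
        # metadata must exist before the director, director before painter
        names = ("metadata", "director", "painter")
        self._subsystems = [Subsystem(n, log) for n in names]
        self._started = False
    def start(self):
        if self._started:      # one-time activation, like the real start()
            return
        for s in self._subsystems:
            s.start()
        self._started = True
    def terminate(self):
        if not self._started:  # no-op if never started
            return
        for s in reversed(self._subsystems):
            s.terminate()
        self._started = False

log = []
ctx = SessionContext(log)
ctx.start()
ctx.terminate()
print(log)  # painter stops first, metadata last
```

Reverse-order teardown ensures the painter never outlives the director it reads from, and the director never outlives the metadata it maps against.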
@@ -0,0 +1,847 @@
import os
import time
import logging
import weakref
import itertools
import collections
from lighthouse.util import *
from lighthouse.util.qt import compute_color_on_gradient
from lighthouse.metadata import DatabaseMetadata
logger = logging.getLogger("Lighthouse.Coverage")
#------------------------------------------------------------------------------
# Coverage Mapping
#------------------------------------------------------------------------------
#
# When raw runtime data (eg, coverage or trace data) is passed into the
# director, it is stored internally in DatabaseCoverage objects. A
# DatabaseCoverage object (as defined below) roughly equates to a single
# loaded coverage file.
#
# Besides holding loaded coverage data, the DatabaseCoverage objects are
# also responsible for mapping the coverage data to the open database using
# the lifted metadata described in metadata.py.
#
# The 'mapping' objects detailed in this file exist only as a thin layer on
# top of the lifted database metadata.
#
# As mapping objects retain the raw runtime data internally, we are
# able to rebuild mappings should the database structure (and its metadata)
# get updated or refreshed by the user.
#
#------------------------------------------------------------------------------
# Database Coverage
#------------------------------------------------------------------------------
class DatabaseCoverage(object):
"""
Database level coverage mapping.
"""
def __init__(self, palette, name="", filepath=None, data=None):
# color palette
self.palette = palette
# the name of the DatabaseCoverage object
self.name = name
# the filepath this coverage data was sourced from
self.filepath = filepath
# the timestamp of the coverage file on disk
try:
self.timestamp = os.path.getmtime(filepath)
except (OSError, TypeError):
self.timestamp = time.time()
#
# this is the coverage mapping's reference to the underlying database
# metadata. it will use this for all its mapping operations.
#
# here we simply populate the DatabaseCoverage object with a stub
# DatabaseMetadata object, but at runtime we will inject a fully
# collected DatabaseMetadata object as maintained by the director.
#
self._metadata = DatabaseMetadata()
#
# the address hitmap is a dictionary that effectively holds the lowest
# level representation of the original coverage data loaded from disk.
#
# as the name implies, the hitmap will track the number of times a
# given address appeared in the original coverage data.
#
# Eg:
# hitmap =
# {
# 0x8040100: 1,
# 0x8040102: 1,
# 0x8040105: 3,
# 0x8040108: 3, # 0x8040108 was executed 3 times...
# 0x804010a: 3,
# 0x804010f: 1,
# ...
# }
#
# the hitmap gives us an interesting degree of flexibility with regard
# to what data sources we can load coverage data from, and how we
# choose to consume it (eg, visualize coverage, heatmaps, ...)
#
# using hitmap.keys(), we effectively have a coverage bitmap of all
# the addresses executed in the coverage log
#
self._hitmap = collections.Counter(data)
self._imagebase = BADADDR
#
# the coverage hash is a simple hash of the coverage mask (hitmap keys)
#
# it is primarily used by the director as a means of quickly comparing
# two database coverage objects against each other, and speculating on
# the output of logical/arithmetic operations of their coverage data.
#
# this hash will need to be recomputed via _update_coverage_hash()
# anytime new coverage data is introduced to this object, or when the
# hitmap is otherwise modified internally.
#
# this is necessary because we cache the coverage hash. computing the
# hash on demand is expensive, and it really shouldn't change often.
#
# see the usage of 'coverage_hash' in director.py for more info
#
self.coverage_hash = 0
self._update_coverage_hash()
#
# unmapped data is a list of addresses that we have coverage for, but
# could not map to any defined function in the database.
#
# a shortcoming of lighthouse (as recently as v0.8) is that it does
# *not* compute statistics for, or paint, loaded coverage that falls
# outside of defined functions.
#
# under normal circumstances, one can just define a function at the
# area of interest (assuming it was a disassembler issue) and refresh
# the lighthouse metadata to 'map' the missing coverage.
#
# in cases of obfuscation, abnormal control flow, or self modifying
# code, lighthouse will probably not perform well. but to be fair,
# lighthouse was designed for displaying coverage more-so than hit
# tracing or trace exploration.
#
# initially, all loaded coverage data is marked as unmapped
#
self.unmapped_addresses = set(self._hitmap.keys())
#
# at runtime, the _map_coverage() member function of this class is
# responsible for taking the unmapped addresses and mapping them on
# top of the lifted database metadata (self._metadata).
#
# the process of mapping the raw coverage data will yield NodeCoverage
# and FunctionCoverage objects. these are the buckets that the unmapped
# coverage data is poured into during the mapping process.
#
# NodeCoverage objects represent coverage at the node (basic block)
# level and are owned by a respective FunctionCoverage object.
#
# FunctionCoverage objects represent coverage at the function level,
# grouping child NodeCoverage objects and providing higher level statistics.
#
# self.nodes: address --> NodeCoverage
# self.functions: address --> FunctionCoverage
#
self.nodes = {}
self.functions = {}
self.instruction_percent = 0.0
# blocks that have not been fully executed (eg, crash / exception)
self.partial_nodes = set()
self.partial_instructions = set()
# addresses that have been executed, but are not in a defined node
self.orphan_addresses = set()
#
# we instantiate a single weakref proxy of ourself (the DatabaseCoverage
# object) such that we can distribute it to the children we create
# without having to repeatedly instantiate new ones.
#
self._weak_self = weakref.proxy(self)
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def data(self):
"""
Return the backing coverage data (a hitmap).
"""
return self._hitmap
@property
def coverage(self):
"""
Return the coverage (address) bitmap/mask.
"""
return viewkeys(self._hitmap)
@property
def suspicious(self):
"""
Return a bool indicating if the coverage seems badly mapped.
"""
bad = 0
total = len(self.nodes)
if not total:
return False
#
# count the number of nodes (basic blocks) that allegedly were executed
# (they have coverage data) but don't actually have their first
# instruction logged as executed.
#
# this is considered 'suspicious' and should be a red flag that the
# provided coverage data is malformed, or for a different binary
#
for address, node_coverage in iteritems(self.nodes):
if address in node_coverage.executed_instructions:
continue
bad += 1
# compute a percentage of the 'bad nodes'
percent = (bad/float(total))*100
logger.debug("SUSPICIOUS: %5.2f%% (%u/%u)" % (percent, bad, total))
#
# if the percentage of 'bad' coverage nodes is too high, we consider
# this database coverage as 'suspicious' or 'badly mapped'
#
# this number (2%) may need to be tuned. really any non-zero figure
# is strange, but we will give some wiggle room for DBI or
# disassembler fudginess.
#
return percent > 2.0
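The heuristic above can be distilled into a standalone sketch. Here `node_first_instruction_hit` is a hypothetical mapping of node address to whether that node's first instruction appeared in the coverage log (it is not part of the Lighthouse codebase):

```python
def looks_suspicious(node_first_instruction_hit, threshold=2.0):
    """Return True if too many covered nodes lack a hit on their first instruction."""
    total = len(node_first_instruction_hit)
    if not total:
        return False

    # count nodes whose first instruction was never logged as executed
    bad = sum(1 for hit in node_first_instruction_hit.values() if not hit)

    # flag the coverage as suspicious past the tolerance threshold (percent)
    percent = (bad / float(total)) * 100
    return percent > threshold
```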
#--------------------------------------------------------------------------
# Metadata Population
#--------------------------------------------------------------------------
def update_metadata(self, metadata, delta=None):
"""
Install a new database metadata object.
"""
self._metadata = weakref.proxy(metadata)
#
# if the underlying database / metadata gets rebased, we will need to
# rebase our coverage data. the 'raw' coverage data stored in the
# hitmap is stored as absolute addresses for performance reasons
#
# here we compute the offset that we will need to rebase the coverage
# data by should a rebase have occurred
#
rebase_offset = self._metadata.imagebase - self._imagebase
#
# if the coverage's imagebase is still BADADDR, that means that this
# coverage object hasn't yet been mapped onto a given metadata cache.
#
# that's fine, we just need to initialize our imagebase which should
# (hopefully!) match the imagebase originally used when baking the
# coverage data into an absolute address form.
#
if self._imagebase == BADADDR:
self._imagebase = self._metadata.imagebase
self._normalize_coverage()
#
# if the imagebase for this coverage exists, then it is susceptible to
# being rebased by a metadata update. if rebase_offset is non-zero,
# this is an indicator that a rebase has occurred.
#
# when a rebase occurs in the metadata, we must also rebase our
# coverage data (stored in the hitmap)
#
elif rebase_offset:
self._hitmap = { (address + rebase_offset): hits for address, hits in iteritems(self._hitmap) }
self._imagebase = self._metadata.imagebase
#
# since the metadata has been updated in one form or another, we need
# to trash our existing coverage mapping, and rebuild it from the data.
#
self.unmap_all()
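The rebase step above amounts to shifting every hitmap key by the delta between the old and new imagebase. A minimal sketch with made-up addresses:

```python
old_imagebase = 0x400000
new_imagebase = 0x560000
rebase_offset = new_imagebase - old_imagebase

# a hitmap of absolute addresses, baked against the old imagebase
hitmap = {0x401000: 2, 0x401004: 5}

# shift every absolute address in the hitmap by the rebase delta
hitmap = {(address + rebase_offset): hits for address, hits in hitmap.items()}
```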
def refresh(self):
"""
Refresh the mapping of our coverage data to the database metadata.
"""
# rebuild our coverage mapping
dirty_nodes, dirty_functions = self._map_coverage()
# bake our coverage map
self._finalize(dirty_nodes, dirty_functions)
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
Does not require @disassembler.execute_ui decorator as no Qt is touched.
"""
for function in self.functions.values():
function.coverage_color = compute_color_on_gradient(
function.instruction_percent,
self.palette.table_coverage_bad,
self.palette.table_coverage_good
)
def _finalize(self, dirty_nodes, dirty_functions):
"""
Finalize the DatabaseCoverage statistics / data for use.
"""
self._finalize_nodes(dirty_nodes)
self._finalize_functions(dirty_functions)
self._finalize_instruction_percent()
def _finalize_nodes(self, dirty_nodes):
"""
Finalize the NodeCoverage objects statistics / data for use.
"""
metadata = self._metadata
for address, node_coverage in iteritems(dirty_nodes):
node_coverage.finalize()
# save off a reference to partially executed nodes
if node_coverage.instructions_executed != metadata.nodes[address].instruction_count:
self.partial_nodes.add(address)
else:
self.partial_nodes.discard(address)
# finalize the set of instructions executed in partially executed nodes
instructions = []
for node_address in self.partial_nodes:
instructions.append(self.nodes[node_address].executed_instructions)
self.partial_instructions = set(itertools.chain.from_iterable(instructions))
def _finalize_functions(self, dirty_functions):
"""
Finalize the FunctionCoverage objects statistics / data for use.
"""
for function_coverage in itervalues(dirty_functions):
function_coverage.finalize()
def _finalize_instruction_percent(self):
"""
Finalize the DatabaseCoverage's coverage % by instructions executed.
"""
# sum all the instructions in the database metadata
total = sum(f.instruction_count for f in itervalues(self._metadata.functions))
if not total:
self.instruction_percent = 0.0
return
# sum the unique instructions executed across all functions
executed = sum(f.instructions_executed for f in itervalues(self.functions))
# save the computed percentage of database instructions executed (0 to 1.0)
self.instruction_percent = float(executed) / total
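The database-wide percentage reduces to unique instructions executed over total instructions across all function metadata. A sketch with hypothetical per-function counts:

```python
# instruction totals per function, as the metadata cache might report them
function_instruction_counts = {0x401000: 10, 0x402000: 30}

# unique instructions executed per function, per the coverage mapping
function_instructions_executed = {0x401000: 5, 0x402000: 15}

total = sum(function_instruction_counts.values())
executed = sum(function_instructions_executed.values())

# computed percentage of database instructions executed (0 to 1.0)
instruction_percent = float(executed) / total if total else 0.0
```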
#--------------------------------------------------------------------------
# Data Operations
#--------------------------------------------------------------------------
def add_data(self, data, update=True):
"""
Add an existing instruction hitmap to the coverage mapping.
"""
# add the given runtime data to our data source
for address, hit_count in iteritems(data):
self._hitmap[address] += hit_count
# do not update other internal structures if requested
if not update:
return
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# mark these touched addresses as dirty
self.unmapped_addresses |= viewkeys(data)
def add_addresses(self, addresses, update=True):
"""
Add a list of instruction addresses to the coverage mapping.
"""
# increment the hit count for an address
for address in addresses:
self._hitmap[address] += 1
# do not update other internal structures if requested
if not update:
return
# update the coverage hash in case the hitmap changed
self._update_coverage_hash()
# mark these touched addresses as dirty
self.unmapped_addresses |= set(addresses)
def subtract_data(self, data):
"""
Subtract an existing instruction hitmap from the coverage mapping.
"""
# subtract the given hitmap from our existing hitmap
for address, hit_count in iteritems(data):
self._hitmap[address] -= hit_count
#
# if there is no longer any hits for this address, delete its
# entry from the hitmap dictionary. we don't want its entry to
# hang around because we use self._hitmap.viewkeys() as a
# coverage bitmap/mask
#
if not self._hitmap[address]:
del self._hitmap[address]
# update the coverage hash as the hitmap has probably changed
self._update_coverage_hash()
#
# unmap everything because a complete re-mapping is easier with the
# current implementation of things
#
self.unmap_all()
def mask_data(self, coverage_mask):
"""
Mask the hitmap data against a given coverage mask.
Returns a new DatabaseCoverage containing the masked hitmap.
"""
composite_data = collections.defaultdict(int)
# preserve only hitmap data that matches the coverage mask
for address in coverage_mask:
composite_data[address] = self._hitmap[address]
# done, return a new DatabaseCoverage masked with the given coverage
return DatabaseCoverage(self.palette, data=composite_data)
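Masking is simply key-filtering the hitmap against a given coverage mask, as in this small sketch (hypothetical data):

```python
import collections

hitmap = {0x401000: 2, 0x401004: 5, 0x401008: 1}
coverage_mask = {0x401000, 0x401008}

# preserve only hitmap entries whose address appears in the mask
composite_data = collections.defaultdict(int)
for address in coverage_mask:
    composite_data[address] = hitmap[address]
```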
def _update_coverage_hash(self):
"""
Update the hash of the coverage mask.
"""
if self._hitmap:
self.coverage_hash = hash(frozenset(viewkeys(self._hitmap)))
else:
self.coverage_hash = 0
#--------------------------------------------------------------------------
# Coverage Mapping
#--------------------------------------------------------------------------
def _normalize_coverage(self):
"""
Normalize basic block coverage into instruction coverage.
TODO: It would be interesting if we could do away with this entirely,
working off the original instruction/bb coverage data (hitmap) instead.
"""
coverage_addresses = viewkeys(self._hitmap)
if not coverage_addresses:
return
# bucketize the exploded coverage addresses
instructions = coverage_addresses & self._metadata.instructions
basic_blocks = instructions & viewkeys(self._metadata.nodes)
#
# here we attempt to compute the ratio between basic block addresses,
# and instruction addresses in the incoming coverage data.
#
# this will help us determine if the existing instruction data is
# sufficient, or whether we need to explode/flatten the basic block
# addresses into their respective child instructions
#
if not instructions:
return
block_ratio = len(basic_blocks) / float(len(instructions))
block_trace_confidence = 0.80
logger.debug("Block ratio %f" % block_ratio)
#
# a low basic block to instruction ratio implies the data is probably
# from an instruction trace, or a drcov trace that was exploded from
# (bb_address, size) into its respective addresses
#
if block_ratio < block_trace_confidence:
return
#
# take each basic block address, and explode it into a list of all the
# instruction addresses contained within the basic block as determined
# by the database metadata cache
#
# it is *possible* that this may introduce 'inaccurate' paint should
# the user provide a basic block trace that crashes mid-block. but
# that is not something we can account for in a block trace...
#
for bb_address in basic_blocks:
bb_hits = self._hitmap[bb_address]
for inst_address in self._metadata.nodes[bb_address].instructions:
self._hitmap[inst_address] = bb_hits
logger.debug("Converted basic block trace to instruction trace...")
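The basic-block explosion described above can be sketched as follows. Here `node_instructions` is a hypothetical stand-in for the metadata cache's node-to-instructions mapping, with a made-up block layout:

```python
# node address --> instruction addresses within that node (made-up layout)
node_instructions = {
    0x401000: [0x401000, 0x401002, 0x401005],
}

# a block-level hitmap: the basic block at 0x401000 was hit 3 times
hitmap = {0x401000: 3}

# explode each covered basic block into its member instructions,
# propagating the block's hit count to every instruction
for bb_address in list(hitmap):
    for inst_address in node_instructions.get(bb_address, []):
        hitmap[inst_address] = hitmap[bb_address]
```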
def _map_coverage(self):
"""
Map loaded coverage data to the underlying database metadata.
"""
dirty_nodes = self._map_nodes()
dirty_functions = self._map_functions(dirty_nodes)
return (dirty_nodes, dirty_functions)
def _map_nodes(self):
"""
Map loaded coverage data to database defined nodes (basic blocks).
"""
db_metadata = self._metadata
dirty_nodes = {}
# the coverage data we will attempt to process in this function
coverage_addresses = sorted(self.unmapped_addresses)
#
# the loop below is the core of our coverage mapping process.
#
# operating on whatever coverage data (instruction addresses) reside
# within unmapped_addresses, this loop will attempt to bucket the coverage
# into NodeCoverage objects where possible.
#
# the higher level coverage mappings (eg FunctionCoverage,
# DatabaseCoverage) get built on top of the node mapping that we
# perform here.
#
# since this loop is the most computationally expensive part of the
# mapping process, it has been carefully profiled & optimized for
# speed. please be careful if you wish to modify it...
#
i, num_addresses = 0, len(coverage_addresses)
while i < num_addresses:
# get the next coverage address to map
address = coverage_addresses[i]
# get the node (basic block) metadata that this address falls in
node_metadata = db_metadata.get_node(address)
#
# should we fail to locate node metadata for the coverage address
# that we are trying to map, then the address must not fall inside
# of a defined function
#
if not node_metadata:
self.orphan_addresses.add(address)
if address in db_metadata.instructions:
self.unmapped_addresses.discard(address)
i += 1
continue
#
# we found applicable node metadata for this address, now we will
# try to find an existing bucket (NodeCoverage) for the address
#
if node_metadata.address in self.nodes:
node_coverage = self.nodes[node_metadata.address]
#
# failed to locate an existing NodeCoverage object for this
# address, it looks like this is the first time we have attempted
# to bucket coverage for this node.
#
# create a new NodeCoverage bucket and use it now
#
else:
node_coverage = NodeCoverage(node_metadata.address, self._weak_self)
self.nodes[node_metadata.address] = node_coverage
# alias for speed, prior to looping
node_start = node_metadata.address
node_end = node_start + node_metadata.size
#
# the loop below is an inlined fast-path that assumes the next
# several coverage addresses will likely belong to the same node
# that we just looked up (or created) in the code above
#
# we can simply re-use the current node and its coverage object
# until the next address to be processed falls outside the node
#
while 1:
#
# map the hitmap data for the current address if it falls on
# an actual instruction start within the node
#
# if the address falls within an instruction, it will just be
# 'ignored', remaining in the 'unmapped' / invisible data
#
if address in node_metadata.instructions:
node_coverage.executed_instructions[address] = self._hitmap[address]
self.unmapped_addresses.discard(address)
# get the next address to attempt mapping on
try:
i += 1
address = coverage_addresses[i]
# an IndexError implies there is nothing left to map...
except IndexError:
break
#
# if the next address is not in this node, it's time to break out
# of this loop and send it through the full node lookup path
#
if not (node_start <= address < node_end):
break
# the node was updated, so save its coverage as dirty
dirty_nodes[node_metadata.address] = node_coverage
# done, return a map of NodeCoverage objects that were modified
return dirty_nodes
def _map_functions(self, dirty_nodes):
"""
Map loaded coverage data to database defined functions.
"""
dirty_functions = {}
#
# thanks to _map_nodes(), we now have a repository of NodeCoverage
# objects that are considered 'dirty' and can be used precisely to
# build or update the function level coverage metadata
#
for node_coverage in itervalues(dirty_nodes):
#
# using a given NodeCoverage object, we retrieve its underlying
# metadata so that we can perform a reverse lookup of its function
# (parent) metadata.
#
functions = self._metadata.get_functions_by_node(node_coverage.address)
#
# now we will attempt to retrieve the FunctionCoverage objects
# that we need to parent the given NodeCoverage object to
#
for function_metadata in functions:
function_coverage = self.functions.get(function_metadata.address, None)
#
# if we failed to locate the FunctionCoverage for a function
# that references this node, then it is the first time we have
# seen coverage for it.
#
# create a new coverage function object and use it now.
#
if not function_coverage:
function_coverage = FunctionCoverage(function_metadata.address, self._weak_self)
self.functions[function_metadata.address] = function_coverage
# add the NodeCoverage object to its parent FunctionCoverage
function_coverage.mark_node(node_coverage)
dirty_functions[function_metadata.address] = function_coverage
# done, return a map of FunctionCoverage objects that were modified
return dirty_functions
def unmap_all(self):
"""
Unmap all mapped coverage data.
"""
# clear out the processed / computed coverage data structures
self.nodes = {}
self.functions = {}
self.partial_nodes = set()
self.partial_instructions = set()
self.orphan_addresses = set()
# dump the source coverage data back into an 'unmapped' state
self.unmapped_addresses = set(self._hitmap.keys())
#------------------------------------------------------------------------------
# Function Coverage
#------------------------------------------------------------------------------
class FunctionCoverage(object):
"""
Function level coverage mapping.
"""
def __init__(self, function_address, database=None):
self.database = database
self.address = function_address
# addresses of nodes executed
self.nodes = {}
# compute the # of instructions executed by this function's coverage
self.instruction_percent = 0.0
self.node_percent = 0.0
# baked colors
self.coverage_color = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def hits(self):
"""
Return the number of instruction executions in this function.
"""
return sum(x.hits for x in itervalues(self.nodes))
@property
def nodes_executed(self):
"""
Return the number of unique nodes executed in this function.
"""
return len(self.nodes)
@property
def instructions_executed(self):
"""
Return the number of unique instructions executed in this function.
"""
return sum(x.instructions_executed for x in itervalues(self.nodes))
@property
def instructions(self):
"""
Return the executed instruction addresses in this function.
"""
return set([ea for node in itervalues(self.nodes) for ea in node.executed_instructions.keys()])
#--------------------------------------------------------------------------
# Controls
#--------------------------------------------------------------------------
def mark_node(self, node_coverage):
"""
Save the given NodeCoverage to this function.
"""
self.nodes[node_coverage.address] = node_coverage
def finalize(self):
"""
Finalize the FunctionCoverage data for use.
"""
function_metadata = self.database._metadata.functions[self.address]
# compute the % of nodes executed
self.node_percent = float(self.nodes_executed) / function_metadata.node_count
# compute the % of instructions executed
self.instruction_percent = \
float(self.instructions_executed) / function_metadata.instruction_count
# the sum of node executions in this function
node_sum = sum(x.executions for x in itervalues(self.nodes))
# the estimated number of executions this function has experienced
self.executions = float(node_sum) / function_metadata.node_count
# bake colors
self.coverage_color = compute_color_on_gradient(
self.instruction_percent,
self.database.palette.table_coverage_bad,
self.database.palette.table_coverage_good
)
#------------------------------------------------------------------------------
# Node Coverage
#------------------------------------------------------------------------------
class NodeCoverage(object):
"""
Node (basic block) level coverage mapping.
"""
def __init__(self, node_address, database=None):
self.database = database
self.address = node_address
self.executed_instructions = {}
self.instructions_executed = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def hits(self):
"""
Return the number of instruction executions in this node.
"""
return sum(itervalues(self.executed_instructions))
#--------------------------------------------------------------------------
# Controls
#--------------------------------------------------------------------------
def finalize(self):
"""
Finalize the coverage metrics for faster access.
"""
node_metadata = self.database._metadata.nodes[self.address]
# the estimated number of executions this node has experienced.
self.executions = float(self.hits) / node_metadata.instruction_count
# the number of unique instructions executed
self.instructions_executed = len(self.executed_instructions)
@@ -0,0 +1,146 @@
from lighthouse.util.log import lmsg
from lighthouse.util.misc import iteritems
from lighthouse.util.disassembler import disassembler
#------------------------------------------------------------------------------
# Exception Definitions
#------------------------------------------------------------------------------
class LighthouseError(Exception):
"""
An error generated by Lighthouse.
"""
def __init__(self, *args, **kwargs):
super(LighthouseError, self).__init__(*args, **kwargs)
#------------------------------------------------------------------------------
# Coverage File Exceptions
#------------------------------------------------------------------------------
class CoverageException(LighthouseError):
"""
A class of errors pertaining to loading & mapping coverage files.
"""
name = NotImplementedError
description = NotImplementedError
def __init__(self, message, filepath):
super(CoverageException, self).__init__(message)
self.filepath = filepath
@property
def verbose(self):
return "Error: %s\n\n%s" % (self.name, self.description)
def __str__(self):
return "%s '%s'" % (self.args[0], self.filepath)
class CoverageParsingError(CoverageException):
"""
An error generated by the CoverageReader when all parsers fail.
"""
name = "PARSE_FAILURE"
description = \
"Failed to parse one or more of the selected coverage files!\n\n" \
" Possible reasons:\n" \
" - You selected a file that was *not* a coverage file.\n" \
" - The selected coverage file is malformed or unreadable.\n" \
" - A suitable parser for the coverage file is not installed.\n\n" \
"Please see the disassembler console for more info..."
def __init__(self, filepath, tracebacks):
super(CoverageParsingError, self).__init__("Failed to parse coverage file", filepath)
self.tracebacks = tracebacks
class CoverageMissingError(CoverageException):
"""
An error generated when no data was extracted from a CoverageFile.
"""
name = "NO_COVERAGE_ERROR"
description = \
"No usable coverage data was extracted from one of the selected files.\n\n" \
" Possible reasons:\n" \
" - You selected a coverage file for the wrong binary.\n" \
" - The name of the executable file used to generate this database\n" \
" is different than the one you collected coverage against.\n" \
" - Your DBI failed to collect any coverage for this binary.\n\n" \
"Please see the disassembler console for more info..."
def __init__(self, filepath):
super(CoverageMissingError, self).__init__("No coverage extracted from file", filepath)
class CoverageMappingAbsent(CoverageException):
"""
A warning generated when coverage data cannot be mapped.
"""
name = "NO_COVERAGE_MAPPED"
description = \
"One or more of the loaded coverage files has no visibly mapped data.\n\n" \
" Possible reasons:\n" \
" - The loaded coverage data does not fall within defined functions.\n" \
" - You loaded an absolute address trace with a different imagebase.\n" \
" - The coverage data might be corrupt or malformed.\n\n" \
"Please see the disassembler console for more info..."
def __init__(self, coverage):
super(CoverageMappingAbsent, self).__init__("No coverage data could be mapped", coverage.filepath)
self.coverage = coverage
class CoverageMappingSuspicious(CoverageException):
"""
A warning generated when coverage data does not appear to match the database.
"""
name = "BAD_COVERAGE_MAPPING"
description = \
"One or more of the loaded coverage files appears to be badly mapped.\n\n" \
" Possible reasons:\n" \
" - You selected the wrong binary/module to load coverage from.\n" \
" - Your coverage file/data is for a different version of the\n" \
" binary that does not match what the disassembler has open.\n" \
" - You recorded self-modifying code or something with very\n" \
" abnormal control flow (obfuscated code, malware, packers).\n" \
" - The coverage data might be corrupt or malformed.\n\n" \
"This means that any coverage displayed by Lighthouse is PROBABLY\n" \
"WRONG and is not to be trusted because the coverage data does not\n" \
"appear to match the disassembled binary."
def __init__(self, coverage):
super(CoverageMappingSuspicious, self).__init__("Coverage data appears badly mapped", coverage.filepath)
self.coverage = coverage
#------------------------------------------------------------------------------
# UI Warnings
#------------------------------------------------------------------------------
def warn_errors(errors, ignore=[]):
"""
Warn the user of any encountered errors with a messagebox.
"""
if not errors:
return
for error_type, error_list in iteritems(errors):
#
# loop through the individual instances/files that caused this error
# and dump the results to the disassembler console...
#
lmsg("-"*50)
lmsg("Files reporting %s:" % error_type.name)
for error in error_list:
lmsg(" - %s" % error.filepath)
# suppress popups for certain errors, if the user has specified such
if error_type in ignore:
continue
#
# popup a more verbose error messagebox for the user to read regarding
# this class of error they encountered
#
disassembler.warning(error.verbose)
# done ...
lmsg("-"*50)
@@ -0,0 +1,263 @@
import ctypes
import logging
from binaryninja import PluginCommand
from binaryninjaui import UIAction, UIActionHandler, Menu
from lighthouse.context import LighthouseContext
from lighthouse.integration.core import LighthouseCore
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.Binja.Integration")
#------------------------------------------------------------------------------
# Lighthouse Binja Integration
#------------------------------------------------------------------------------
class LighthouseBinja(LighthouseCore):
"""
Lighthouse UI Integration for Binary Ninja.
"""
def __init__(self):
super(LighthouseBinja, self).__init__()
def get_context(self, dctx, startup=True):
"""
Get the LighthouseContext object for a given database context.
In Binary Ninja, a dctx is a BinaryView (BV).
"""
dctx_id = ctypes.addressof(dctx.handle.contents)
#
# create a new LighthouseContext if this is the first time a context
# has been requested for this BNDB / bv
#
if dctx_id not in self.lighthouse_contexts:
# create a new 'context' representing this BNDB / bv
lctx = LighthouseContext(self, dctx)
if startup:
lctx.start()
# save the created ctx for future calls
self.lighthouse_contexts[dctx_id] = lctx
#
# for binja, we basically *never* want to start the lighthouse ctx
# when it is first created. this is because binja will *immediately*
# create a coverage overview widget for every database when it is
# first opened.
#
# this is annoying, because we don't want to actually start up all
# of the lighthouse threads and subsystems unless the user actually
# starts trying to use lighthouse for their session.
#
# so we initialize the lighthouse context (with start()) on the
# second context request which will go through the else block
# below... any subsequent call to start() is effectively a nop!
#
else:
lctx = self.lighthouse_contexts[dctx_id]
lctx.start()
# return the lighthouse context object for this database ctx / bv
return lctx
def binja_close_context(self, dctx):
"""
Attempt to close / spin-down the LighthouseContext for the given dctx.
In Binary Ninja, a dctx is a BinaryView (BV).
"""
dctx_id = ctypes.addressof(dctx.handle.contents)
# fetch the LighthouseContext for the closing BNDB
try:
lctx = self.lighthouse_contexts.pop(dctx_id)
#
# if lighthouse was not actually used for this BNDB / session, then
# the lookup will fail as there is nothing to spindown
#
except KeyError:
return
# spin down the closing context (stop threads, cleanup qt state, etc)
logger.info("Closing a LighthouseContext...")
lctx.terminate()
#--------------------------------------------------------------------------
# UI Integration (Internal)
#--------------------------------------------------------------------------
#
# TODO / HACK / XXX / V35 / 2021: Some of Binja's UI elements (such as the
# terminal) do not get assigned a BV, even if there is only one open.
#
# this is problematic, because if the user 'clicks' onto the terminal, and
# then tries to execute our UIActions (like 'Load Coverage File'), the
# given 'context.binaryView' will be None
#
# in the meantime, we have to use this workaround that will try to grab
# the 'current' bv from the dock. this is not ideal, but it will suffice.
#
# -----------------
#
# XXX: It's now 2024, Binja's UI / API stack has grown a lot. it's more
# powerful and a bunch of the oddities / hacks lighthouse employed for
# binja may no longer apply. this whole file should probably be revisited
# and refactored at some point... sorry if it's hard to follow
#
def _interactive_load_file(self, context):
dctx = disassembler.binja_get_bv_from_dock()
if not dctx:
disassembler.warning("Lighthouse requires an open BNDB to load coverage.")
return
super(LighthouseBinja, self).interactive_load_file(dctx)
def _interactive_load_batch(self, context):
dctx = disassembler.binja_get_bv_from_dock()
if not dctx:
disassembler.warning("Lighthouse requires an open BNDB to load coverage.")
return
super(LighthouseBinja, self).interactive_load_batch(dctx)
def _open_coverage_xref(self, context):
super(LighthouseBinja, self).open_coverage_xref(context.address, context.binaryView)
def _interactive_coverage_xref(self, context):
if context is None:
return
#
# this is a special case where we check if the ctx exists rather than
# blindly creating a new one. again, this is because binja may call
# this function at random times to decide whether it should display the
# XREF menu option.
#
# but asking whether or not the xref menu option should be shown is not
# a good indication of 'is the user actually using lighthouse' so we
# do not want this to be one that creates lighthouse contexts
#
dctx = context.binaryView
if not dctx:
return
dctx_id = ctypes.addressof(dctx.handle.contents)
lctx = self.lighthouse_contexts.get(dctx_id, None)
if not lctx:
return
#
# is there even any coverage loaded into lighthouse? if not, the user
# probably isn't even using it. so don't bother showing the xref action
#
if not lctx.director.coverage_names:
return
if context.view is None:
return
view = context.view
context_menu = view.contextMenu()
#
# Create a new, temporary Coverage Xref action to inject into the
# right click context menu that is being shown...
#
action = "Coverage Xref"
UIAction.registerAction(action)
action_handler = view.actionHandler()
action_handler.bindAction(action, UIAction(self._open_coverage_xref))
context_menu.addAction(action, "Plugins")
def _is_xref_valid(self, dctx, addr):
#
# this is a special case where we check if the ctx exists rather than
# blindly creating a new one. again, this is because binja may call
# this function at random times to decide whether it should display the
# XREF menu option.
#
# but asking whether or not the xref menu option should be shown is not
# a good indication of 'is the user actually using lighthouse', so we
# do not want this code path to create lighthouse contexts
#
dctx_id = ctypes.addressof(dctx.handle.contents)
lctx = self.lighthouse_contexts.get(dctx_id, None)
if not lctx:
return False
# return True if there appears to be coverage loaded...
return bool(lctx.director.coverage_names)
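The handle-address lookup used in `_is_xref_valid` generalizes: Binary Ninja hands the plugin raw BinaryView objects, so a stable per-view key is derived from the address of the underlying C object behind the handle. A minimal sketch of the idea with plain ctypes (`_Obj` and the names here are illustrative stand-ins, not Binja API):

```python
import ctypes

class _Obj(ctypes.Structure):
    # stand-in for the C object behind a BinaryView handle
    _fields_ = [("value", ctypes.c_int)]

def stable_id(handle):
    # the address of the pointed-to C object is stable for its lifetime,
    # so it works as a dictionary key for per-view plugin state
    return ctypes.addressof(handle.contents)

contexts = {}
obj = ctypes.pointer(_Obj(42))
contexts[stable_id(obj)] = "lighthouse context"

# a second pointer wrapping the same underlying object yields the same key
alias = ctypes.pointer(obj.contents)
print(contexts[stable_id(alias)])  # prints 'lighthouse context'
```

Two distinct pointer objects to the same view resolve to one context, which is exactly what the repeated `ctypes.addressof(dctx.handle.contents)` lookups above rely on.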
def _open_coverage_overview(self, context):
dctx = disassembler.binja_get_bv_from_dock()
if not dctx:
disassembler.warning("Lighthouse requires an open BNDB to open the overview.")
return
super(LighthouseBinja, self).open_coverage_overview(dctx)
def _stub(self, context):
# XXX: This was added as a last minute bodge prior to releasing v0.9.3,
# it fixes a crash-on-close that was manifesting on binja macOS, when
# using a lambda instead of a concrete function/stub like this.
return None
#--------------------------------------------------------------------------
# Binja Actions
#--------------------------------------------------------------------------
ACTION_LOAD_FILE = "Lighthouse\\Load code coverage file..."
ACTION_LOAD_BATCH = "Lighthouse\\Load code coverage batch..."
ACTION_COVERAGE_XREF = "Lighthouse\\Coverage Xref"
ACTION_COVERAGE_OVERVIEW = "Lighthouse\\Open Coverage Overview"
def _install_load_file(self):
action = self.ACTION_LOAD_FILE
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._interactive_load_file))
Menu.mainMenu("Plugins").addAction(action, "Loading", 0)
logger.info("Installed the 'Code coverage file' menu entry")
def _install_load_batch(self):
action = self.ACTION_LOAD_BATCH
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._interactive_load_batch))
Menu.mainMenu("Plugins").addAction(action, "Loading", 1)
logger.info("Installed the 'Code coverage batch' menu entry")
def _install_open_coverage_xref(self):
action = self.ACTION_COVERAGE_XREF
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._stub, self._interactive_coverage_xref))
Menu.mainMenu("Plugins").addAction(action, "Loading", 2)
# NOTE/V35: Binja automatically creates View --> Show Coverage Overview
def _install_open_coverage_overview(self):
action = self.ACTION_COVERAGE_OVERVIEW
UIAction.registerAction(action)
UIActionHandler.globalActions().bindAction(action, UIAction(self._open_coverage_overview))
Menu.mainMenu("Plugins").addAction(action, "Windows", 0)
logger.info("Installed the 'Open Coverage Overview' menu entry")
# NOTE/V35: Binja doesn't really 'unload' plugins, so whatever...
def _uninstall_load_file(self):
pass
def _uninstall_load_batch(self):
pass
def _uninstall_open_coverage_xref(self):
pass
def _uninstall_open_coverage_overview(self):
pass
@@ -0,0 +1,32 @@
import logging
from lighthouse.util.log import lmsg
from lighthouse.integration.binja_integration import LighthouseBinja
logger = logging.getLogger("Lighthouse.Binja.Loader")
#------------------------------------------------------------------------------
# Lighthouse Binja Loader
#------------------------------------------------------------------------------
#
# The Binary Ninja plugin loading process is less involved compared to IDA.
#
# When Binary Ninja is starting up, it will import all python files placed
# in its root plugin folder. It will then attempt to import any *directory*
# in the plugin folder as a python module.
#
# For this reason, you may see Binary Ninja attempting to load 'lighthouse'
# and 'lighthouse_plugin' in your console. This is normal due to the way
# we have structured Lighthouse and its loading process.
#
# In practice, lighthouse_plugin.py will import the contents of this file,
# when Binary Ninja is starting up. As such, this is our only opportunity
# to load & integrate Lighthouse.
#
try:
lighthouse = LighthouseBinja()
lighthouse.load()
except Exception as e:
lmsg("Failed to initialize Lighthouse")
logger.exception("Exception details:")
@@ -0,0 +1,412 @@
import abc
import logging
import lighthouse
from lighthouse.util import lmsg
from lighthouse.util.qt import *
from lighthouse.util.update import check_for_update
from lighthouse.util.disassembler import disassembler
from lighthouse.ui import *
from lighthouse.metadata import metadata_progress
from lighthouse.exceptions import *
logger = logging.getLogger("Lighthouse.Core")
#------------------------------------------------------------------------------
# Lighthouse Plugin Core
#------------------------------------------------------------------------------
class LighthouseCore(object):
__metaclass__ = abc.ABCMeta
#--------------------------------------------------------------------------
# Plugin Metadata
#--------------------------------------------------------------------------
PLUGIN_VERSION = "0.9.4-DEV"
AUTHORS = "Markus Gaasedelen"
DATE = "2024"
#--------------------------------------------------------------------------
# Initialization
#--------------------------------------------------------------------------
def load(self):
"""
Load the plugin, and integrate its UI into the disassembler.
"""
self._update_checked = False
self.lighthouse_contexts = {}
# the plugin color palette
self.palette = LighthousePalette()
self.palette.theme_changed(self.refresh_theme)
def create_coverage_overview(name, parent, dctx):
lctx = self.get_context(dctx, startup=False)
widget = disassembler.create_dockable_widget(parent, name)
overview = CoverageOverview(lctx, widget)
return widget
# the coverage overview widget
disassembler.register_dockable("Coverage Overview", create_coverage_overview)
# install disassembler UI
self._install_ui()
# install entry point for headless / terminal access...
lighthouse.get_context = self.get_context
# plugin loaded successfully, print the plugin banner
self.print_banner()
logger.info("Successfully loaded plugin")
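The `create_coverage_overview` closure registered in `load()` follows a factory-callback pattern: the disassembler invokes the registered factory on demand, and the closure captures `self` so per-database state is created lazily. A stripped-down sketch of the pattern (the registry and names here are illustrative, not the real disassembler API):

```python
# a toy registry standing in for disassembler.register_dockable(...)
registry = {}

def register_dockable(name, factory):
    registry[name] = factory

def show_dockable(name, parent, dctx):
    # the host calls the factory only when the widget is actually shown
    return registry[name](name, parent, dctx)

class Plugin:
    def __init__(self):
        self.contexts = {}

    def install(self):
        def create_widget(name, parent, dctx):
            # closure captures `self`, so per-database state is lazy
            ctx = self.contexts.setdefault(dctx, object())
            return (name, parent, ctx)
        register_dockable("Coverage Overview", create_widget)

p = Plugin()
p.install()
widget = show_dockable("Coverage Overview", "main_window", "db1")
```

The benefit is that no Qt state is built for databases the user never opens the overview in.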
def unload(self):
"""
Unload the plugin, and remove any UI integrations.
"""
self._uninstall_ui()
# remove headless entry point
lighthouse.get_context = lambda x: None
# spin down any active contexts (stop threads, cleanup qt state, etc)
for lctx in self.lighthouse_contexts.values():
lctx.terminate()
logger.info("-"*75)
logger.info("Plugin terminated")
def print_banner(self):
"""
Print the plugin banner.
"""
# build the main banner title
banner_params = (self.PLUGIN_VERSION, self.AUTHORS, self.DATE)
banner_title = "v%s - (c) %s - %s" % banner_params
# print plugin banner
lmsg("Loaded %s" % banner_title)
#--------------------------------------------------------------------------
# Disassembler / Database Context Selector
#--------------------------------------------------------------------------
@abc.abstractmethod
def get_context(self, dctx, startup=True):
"""
Get the LighthouseContext object for a given database context.
"""
pass
#--------------------------------------------------------------------------
# UI Integration (Internal)
#--------------------------------------------------------------------------
def _install_ui(self):
"""
Initialize & integrate all plugin UI elements.
"""
self._install_load_file()
self._install_load_batch()
self._install_open_coverage_xref()
self._install_open_coverage_overview()
def _uninstall_ui(self):
"""
Cleanup & remove all plugin UI integrations.
"""
self._uninstall_open_coverage_overview()
self._uninstall_open_coverage_xref()
self._uninstall_load_batch()
self._uninstall_load_file()
@abc.abstractmethod
def _install_load_file(self):
"""
Install the 'File->Load->Code coverage file...' menu entry.
"""
pass
@abc.abstractmethod
def _install_load_batch(self):
"""
Install the 'File->Load->Code coverage batch...' menu entry.
"""
pass
@abc.abstractmethod
def _install_open_coverage_xref(self):
"""
Install the right click 'Coverage Xref' context menu entry.
"""
pass
@abc.abstractmethod
def _install_open_coverage_overview(self):
"""
Install the 'View->Open subviews->Coverage Overview' menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_load_file(self):
"""
Remove the 'File->Load file->Code coverage file...' menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_load_batch(self):
"""
Remove the 'File->Load file->Code coverage batch...' menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_open_coverage_xref(self):
"""
Remove the right click 'Coverage Xref' context menu entry.
"""
pass
@abc.abstractmethod
def _uninstall_open_coverage_overview(self):
"""
Remove the 'View->Open subviews->Coverage Overview' menu entry.
"""
pass
#--------------------------------------------------------------------------
# UI Actions (Public)
#--------------------------------------------------------------------------
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
for lctx in self.lighthouse_contexts.values():
lctx.director.refresh_theme()
if lctx.coverage_overview:
lctx.coverage_overview.refresh_theme()
lctx.painter.force_repaint()
def open_coverage_overview(self, dctx=None):
"""
Open the dockable 'Coverage Overview' dialog.
"""
lctx = self.get_context(dctx)
# the coverage overview is already open & visible, nothing to do
if lctx.coverage_overview and lctx.coverage_overview.visible:
return
# show the coverage overview
disassembler.show_dockable("Coverage Overview")
# trigger an update check (this should only ever really 'check' once)
self.check_for_update()
def open_coverage_xref(self, address, dctx=None):
"""
Open the 'Coverage Xref' dialog for a given address.
"""
lctx = self.get_context(dctx)
# show the coverage xref dialog
dialog = CoverageXref(lctx.director, address)
if not dialog.exec_():
return
# activate the user selected xref (if one was double clicked)
if dialog.selected_coverage:
lctx.director.select_coverage(dialog.selected_coverage)
return
# load a coverage file from disk
disassembler.show_wait_box("Loading coverage from disk...")
created_coverage, errors = lctx.director.load_coverage_files(
[dialog.selected_filepath],
disassembler.replace_wait_box
)
if not created_coverage:
lmsg("No coverage files could be loaded...")
disassembler.hide_wait_box()
warn_errors(errors)
return
disassembler.replace_wait_box("Selecting coverage...")
lctx.director.select_coverage(created_coverage[0].name)
disassembler.hide_wait_box()
def interactive_load_batch(self, dctx=None):
"""
Perform the user-interactive loading of a coverage batch.
"""
lctx = self.get_context(dctx)
#
# kick off an asynchronous metadata refresh. this will run in the
# background while the user is selecting which coverage files to load
#
future = lctx.metadata.refresh_async(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
# can select the coverage files they would like to load from disk
#
filepaths = lctx.select_coverage_files()
if not filepaths:
lctx.director.metadata.abort_refresh()
return
# prompt the user to name the new coverage aggregate
default_name = "BATCH_%s" % lctx.director.peek_shorthand()
ok, batch_name = prompt_string(
"Batch Name:",
"Please enter a name for this coverage",
default_name
)
#
# if user didn't enter a name for the batch (or hit cancel) we should
# abort the loading process...
#
if not (ok and batch_name):
lmsg("User failed to enter a name for the batch coverage...")
lctx.director.metadata.abort_refresh()
return
#
# to begin mapping the loaded coverage data, we require that the
# asynchronous database metadata refresh has completed. if it is
# not done yet, we will block here until it completes.
#
# a progress dialog depicts the work remaining in the refresh
#
disassembler.show_wait_box("Building database metadata...")
lctx.metadata.go_synchronous()
await_future(future)
#
# now that the database metadata is available, we can use the director
# to normalize and condense (aggregate) all the coverage data
#
disassembler.replace_wait_box("Loading coverage from disk...")
batch_coverage, errors = lctx.director.load_coverage_batch(
filepaths,
batch_name,
disassembler.replace_wait_box
)
# if batch creation fails...
if not batch_coverage:
lmsg("Creation of batch '%s' failed..." % batch_name)
disassembler.hide_wait_box()
warn_errors(errors)
return
# select the newly created batch coverage
disassembler.replace_wait_box("Selecting coverage...")
lctx.director.select_coverage(batch_name)
# all done! pop the coverage overview to show the user their results
disassembler.hide_wait_box()
lmsg("Successfully loaded batch %s..." % batch_name)
self.open_coverage_overview(lctx.dctx)
# finally, emit any notable issues that occurred during load
warn_errors(errors, lctx.director.suppressed_errors)
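The batch-load flow above hides metadata-refresh latency behind user interaction: the refresh is kicked off before the file dialog opens and is only awaited once its result is actually needed. A minimal sketch of that overlap using `concurrent.futures` (the refresh function and timings are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def refresh_metadata():
    time.sleep(0.1)               # stand-in for an expensive database walk
    return {"functions": 128}

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(refresh_metadata)    # starts in the background
    filepaths = ["trace1.log", "trace2.log"]  # stand-in for the file dialog
    if not filepaths:
        future.cancel()                       # analogous to abort_refresh()
    else:
        metadata = future.result()            # block only once required
```

If the user spends longer in the dialog than the refresh takes, `result()` returns immediately and the wait box never has to block.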
def interactive_load_file(self, dctx=None):
"""
Perform the user-interactive loading of individual coverage files.
"""
lctx = self.get_context(dctx)
#
# kick off an asynchronous metadata refresh. this will run in the
# background while the user is selecting which coverage files to load
#
future = lctx.metadata.refresh_async(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
# can select the coverage files they would like to load from disk
#
filenames = lctx.select_coverage_files()
if not filenames:
lctx.metadata.abort_refresh()
return
#
# to begin mapping the loaded coverage data, we require that the
# asynchronous database metadata refresh has completed. if it is
# not done yet, we will block here until it completes.
#
# a progress dialog depicts the work remaining in the refresh
#
disassembler.show_wait_box("Building database metadata...")
lctx.metadata.go_synchronous()
await_future(future)
#
# now that the database metadata is available, we can use the director
# to load and normalize the selected coverage files
#
disassembler.replace_wait_box("Loading coverage from disk...")
created_coverage, errors = lctx.director.load_coverage_files(filenames, disassembler.replace_wait_box)
#
# if the director failed to map any coverage, the user probably
# provided bad files. emit any warnings and bail...
#
if not created_coverage:
lmsg("No coverage files could be loaded...")
disassembler.hide_wait_box()
warn_errors(errors)
return
#
# activate the first of the newly loaded coverage file(s). this is the
# one that will be visible in the coverage overview once opened
#
disassembler.replace_wait_box("Selecting coverage...")
lctx.director.select_coverage(created_coverage[0].name)
# all done! pop the coverage overview to show the user their results
disassembler.hide_wait_box()
lmsg("Successfully loaded %u coverage file(s)..." % len(created_coverage))
self.open_coverage_overview(lctx.dctx)
# finally, emit any notable issues that occurred during load
warn_errors(errors, lctx.director.suppressed_errors)
def check_for_update(self):
"""
Check if there is an update available for Lighthouse.
"""
if self._update_checked:
return
# wrap the callback (a popup) to ensure it gets called from the UI
callback = disassembler.execute_ui(disassembler.warning)
# kick off the async update check
check_for_update(self.PLUGIN_VERSION, callback)
self._update_checked = True
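`disassembler.execute_ui(...)` above wraps the warning popup so it always runs on the UI thread, because the update check completes on a worker thread. The marshaling idea can be sketched with a plain queue (the names here are illustrative, not the real `execute_ui` implementation):

```python
import queue
import threading

ui_queue = queue.Queue()

def execute_ui(function):
    # return a wrapper that schedules the call onto the UI thread's
    # event queue instead of invoking it from the calling thread
    def wrapper(*args, **kwargs):
        ui_queue.put((function, args, kwargs))
    return wrapper

shown = []
warning = execute_ui(shown.append)

# a worker thread may "show" the popup safely; nothing runs yet
worker = threading.Thread(target=warning, args=("update available!",))
worker.start()
worker.join()

# the 'UI thread' drains the queue and performs the actual call
func, args, kwargs = ui_queue.get()
func(*args, **kwargs)
```

Qt (and both disassemblers) crash or misbehave when widgets are touched off the main thread, so the async update callback must be deferred this way.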
@@ -0,0 +1,385 @@
import os
import logging
import idaapi
from lighthouse.context import LighthouseContext
from lighthouse.util.misc import plugin_resource
from lighthouse.integration.core import LighthouseCore
logger = logging.getLogger("Lighthouse.IDA.Integration")
#------------------------------------------------------------------------------
# Lighthouse IDA Integration
#------------------------------------------------------------------------------
class LighthouseIDA(LighthouseCore):
"""
Lighthouse UI Integration for IDA Pro.
"""
def __init__(self):
# menu entry icons
self._icon_id_xref = idaapi.BADADDR
self._icon_id_file = idaapi.BADADDR
self._icon_id_batch = idaapi.BADADDR
self._icon_id_overview = idaapi.BADADDR
# IDA ui hooks
self._ui_hooks = UIHooks(self)
# run initialization
super(LighthouseIDA, self).__init__()
def get_context(self, dctx=None, startup=True):
"""
Get the LighthouseContext object for a given database context.
NOTE: since IDA can only have one binary / IDB open at a time, the
dctx (database context) should always be 'None'.
"""
self.palette.warmup()
#
# there should only ever be 'one' disassembler / IDB context at any
# time for IDA. but if one does not exist yet, that means this is the
# first time the user has interacted with Lighthouse for this session
#
if dctx not in self.lighthouse_contexts:
# create a new 'context' representing this IDB
lctx = LighthouseContext(self, dctx)
if startup:
lctx.start()
# save the created ctx for future calls
self.lighthouse_contexts[dctx] = lctx
# return the lighthouse context object for this IDB
return self.lighthouse_contexts[dctx]
#--------------------------------------------------------------------------
# IDA Actions
#--------------------------------------------------------------------------
ACTION_LOAD_FILE = "lighthouse:load_file"
ACTION_LOAD_BATCH = "lighthouse:load_batch"
ACTION_COVERAGE_XREF = "lighthouse:coverage_xref"
ACTION_COVERAGE_OVERVIEW = "lighthouse:coverage_overview"
def _install_load_file(self):
"""
Install the 'File->Load->Code coverage file...' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "load.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_file = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_LOAD_FILE, # The action name
"~C~ode coverage file...", # The action text
IDACtxEntry(self.interactive_load_file), # The action handler
None, # Optional: action shortcut
"Load individual code coverage file(s)", # Optional: tooltip
self._icon_id_file # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load_file action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self.ACTION_LOAD_FILE, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load_file action to the menu")
logger.info("Installed the 'Code coverage file' menu entry")
def _install_load_batch(self):
"""
Install the 'File->Load->Code coverage batch...' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "batch.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_batch = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_LOAD_BATCH, # The action name
"~C~ode coverage batch...", # The action text
IDACtxEntry(self.interactive_load_batch), # The action handler
None, # Optional: action shortcut
"Load and aggregate code coverage files", # Optional: tooltip
self._icon_id_batch # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register load_batch action with IDA")
# attach the action to the File-> dropdown menu
result = idaapi.attach_action_to_menu(
"File/Load file/", # Relative path of where to add the action
self.ACTION_LOAD_BATCH, # The action ID (see above)
idaapi.SETMENU_APP # We want to append the action after ^
)
if not result:
raise RuntimeError("Failed to attach load_batch action to the menu")
logger.info("Installed the 'Code coverage batch' menu entry")
def _install_open_coverage_xref(self):
"""
Install the right click 'Coverage Xref' context menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "batch.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_xref = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_COVERAGE_XREF, # The action name
"Xrefs coverage sets...", # The action text
IDACtxEntry(self._pre_open_coverage_xref),# The action handler
None, # Optional: action shortcut
"List coverage sets containing this address", # Optional: tooltip
self._icon_id_xref # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register coverage_xref action with IDA")
self._ui_hooks.hook()
logger.info("Installed the 'Coverage Xref' menu entry")
def _install_open_coverage_overview(self):
"""
Install the 'View->Open subviews->Coverage Overview' menu entry.
"""
# create a custom IDA icon
icon_path = plugin_resource(os.path.join("icons", "overview.png"))
icon_data = open(icon_path, "rb").read()
self._icon_id_overview = idaapi.load_custom_icon(data=icon_data)
# describe a custom IDA UI action
action_desc = idaapi.action_desc_t(
self.ACTION_COVERAGE_OVERVIEW, # The action name
"~C~overage Overview", # The action text
IDACtxEntry(self.open_coverage_overview), # The action handler
None, # Optional: action shortcut
"Open database code coverage overview", # Optional: tooltip
self._icon_id_overview # Optional: the action icon
)
# register the action with IDA
result = idaapi.register_action(action_desc)
if not result:
raise RuntimeError("Failed to register open coverage overview action with IDA")
# attach the action to the View-> dropdown menu
result = idaapi.attach_action_to_menu(
"View/Open subviews/Hex dump", # Relative path of where to add the action
self.ACTION_COVERAGE_OVERVIEW, # The action ID (see above)
idaapi.SETMENU_INS # We want to insert the action before ^
)
if not result:
raise RuntimeError("Failed to attach action to the 'View/Open subviews' dropdown")
logger.info("Installed the 'Coverage Overview' menu entry")
def _uninstall_load_file(self):
"""
Remove the 'File->Load file->Code coverage file...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/",
self.ACTION_LOAD_FILE
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_LOAD_FILE)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_file)
self._icon_id_file = idaapi.BADADDR
logger.info("Uninstalled the 'Code coverage file' menu entry")
def _uninstall_load_batch(self):
"""
Remove the 'File->Load file->Code coverage batch...' menu entry.
"""
# remove the entry from the File-> menu
result = idaapi.detach_action_from_menu(
"File/Load file/",
self.ACTION_LOAD_BATCH
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_LOAD_BATCH)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_batch)
self._icon_id_batch = idaapi.BADADDR
logger.info("Uninstalled the 'Code coverage batch' menu entry")
def _uninstall_open_coverage_xref(self):
"""
Remove the right click 'Coverage Xref' context menu entry.
"""
self._ui_hooks.unhook()
# unregister the action
result = idaapi.unregister_action(self.ACTION_COVERAGE_XREF)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_xref)
self._icon_id_xref = idaapi.BADADDR
logger.info("Uninstalled the 'Coverage Xref' menu entry")
def _uninstall_open_coverage_overview(self):
"""
Remove the 'View->Open subviews->Coverage Overview' menu entry.
"""
# remove the entry from the View-> menu
result = idaapi.detach_action_from_menu(
"View/Open subviews/Hex dump",
self.ACTION_COVERAGE_OVERVIEW
)
if not result:
return False
# unregister the action
result = idaapi.unregister_action(self.ACTION_COVERAGE_OVERVIEW)
if not result:
return False
# delete the entry's icon
idaapi.free_custom_icon(self._icon_id_overview)
self._icon_id_overview = idaapi.BADADDR
logger.info("Uninstalled the 'Coverage Overview' menu entry")
#--------------------------------------------------------------------------
# Helpers
#--------------------------------------------------------------------------
def _inject_ctx_actions(self, view, popup, view_type):
"""
Inject context menu entries into IDA's right click menus.
NOTE: This is only being used for coverage xref at this time, but
may host additional actions in the future.
"""
if view_type == idaapi.BWN_DISASMS:
idaapi.attach_action_to_popup(
view,
popup,
self.ACTION_COVERAGE_XREF, # The action ID (see above)
"Xrefs graph from...", # Relative path of where to add the action
idaapi.SETMENU_APP # We want to append the action after ^
)
def _pre_open_coverage_xref(self):
"""
Grab a contextual address before opening the coverage xref dialog.
"""
self.open_coverage_xref(idaapi.get_screen_ea())
#------------------------------------------------------------------------------
# IDA UI Helpers
#------------------------------------------------------------------------------
class IDACtxEntry(idaapi.action_handler_t):
"""
A minimal context menu entry class to utilize IDA's action handlers.
"""
def __init__(self, action_function):
idaapi.action_handler_t.__init__(self)
self.action_function = action_function
def activate(self, ctx):
"""
Execute the embedded action_function when this context menu is invoked.
"""
self.action_function()
return 1
def update(self, ctx):
"""
Ensure the context menu is always available in IDA.
"""
return idaapi.AST_ENABLE_ALWAYS
class UIHooks(idaapi.UI_Hooks):
"""
Hooks for IDA's UI subsystem.
At the moment, we are only using these to inject into IDA's right click
context menus (eg, coverage xrefs)
"""
def __init__(self, integration):
self.integration = integration
super(UIHooks, self).__init__()
def finish_populating_widget_popup(self, widget, popup):
"""
A right click menu is about to be shown. (IDA 7.0+)
"""
#
# if lighthouse hasn't been used yet, there's nothing to do. we also
# don't want this event to trigger the creation of a lighthouse
# context! so we should bail early in this case...
#
if not self.integration.lighthouse_contexts:
return 0
# inject any of lighthouse's right click context menu's into IDA
lctx = self.integration.get_context(None)
if lctx.director.coverage_names:
self.integration._inject_ctx_actions(widget, popup, idaapi.get_widget_type(widget))
# must return 0 for ida...
return 0
@@ -0,0 +1,94 @@
import time
import logging
import idaapi
from lighthouse.util.log import lmsg
from lighthouse.integration.ida_integration import LighthouseIDA
logger = logging.getLogger("Lighthouse.IDA.Loader")
#------------------------------------------------------------------------------
# Lighthouse IDA Loader
#------------------------------------------------------------------------------
#
# This file contains a stub 'plugin' class for Lighthouse as required by
# IDA Pro. Practically speaking, there should be little to *no* logic placed
# in this file because it is disassembler-specific.
#
# When IDA Pro is starting up, it will import all python files placed in its
# root plugin folder. It will then attempt to call PLUGIN_ENTRY() on each of
# the imported 'plugins'. We import PLUGIN_ENTRY into lighthouse_plugin.py
# so that IDA can see it.
#
# PLUGIN_ENTRY() is expected to return a plugin object (LighthouseIDAPlugin)
# derived from idaapi.plugin_t. IDA will register the plugin, and interface
# with the plugin object to load / unload the plugin at certain times, per
# its configuration (flags, hotkeys).
#
# There should be virtually no reason for you to modify this file.
#
def PLUGIN_ENTRY():
"""
Required plugin entry point for IDAPython Plugins.
"""
return LighthouseIDAPlugin()
class LighthouseIDAPlugin(idaapi.plugin_t):
"""
The IDA plugin stub for Lighthouse.
"""
#
# Plugin flags:
# - PLUGIN_MOD: Lighthouse is a plugin that may modify the database
# - PLUGIN_PROC: Load/unload Lighthouse when an IDB opens / closes
# - PLUGIN_HIDE: Hide Lighthouse from the IDA plugin menu
#
flags = idaapi.PLUGIN_PROC | idaapi.PLUGIN_MOD | idaapi.PLUGIN_HIDE
comment = "Code Coverage Explorer"
help = ""
wanted_name = "Lighthouse"
wanted_hotkey = ""
#--------------------------------------------------------------------------
# IDA Plugin Overloads
#--------------------------------------------------------------------------
def init(self):
"""
This is called by IDA when it is loading the plugin.
"""
try:
self._lighthouse = LighthouseIDA()
self._lighthouse.load()
except Exception as e:
lmsg("Failed to initialize Lighthouse")
logger.exception("Exception details:")
return idaapi.PLUGIN_KEEP
def run(self, arg):
"""
This is called by IDA when this file is loaded as a script.
"""
idaapi.warning("Lighthouse cannot be run as a script in IDA.")
def term(self):
"""
This is called by IDA when it is unloading the plugin.
"""
logger.debug("IDA term started...")
start = time.time()
logger.debug("-"*50)
try:
self._lighthouse.unload()
self._lighthouse = None
except Exception as e:
logger.exception("Failed to cleanly unload Lighthouse from IDA.")
end = time.time()
logger.debug("-"*50)
logger.debug("IDA term done... (%.3f seconds...)" % (end-start))
File diff suppressed because it is too large
@@ -0,0 +1,9 @@
from .painter import DatabasePainter
from lighthouse.util.disassembler import disassembler
if disassembler.NAME == "IDA":
from .ida_painter import IDAPainter as CoveragePainter
elif disassembler.NAME == "BINJA":
from .binja_painter import BinjaPainter as CoveragePainter
else:
raise NotImplementedError("DISASSEMBLER-SPECIFIC SHIM MISSING")
@@ -0,0 +1,131 @@
import logging
import binaryninja
from binaryninja import HighlightStandardColor
from binaryninja.highlight import HighlightColor
from lighthouse.painting import DatabasePainter
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.Painting.Binja")
#------------------------------------------------------------------------------
# Binary Ninja Painter
#------------------------------------------------------------------------------
class BinjaPainter(DatabasePainter):
"""
Asynchronous Binary Ninja database painter.
"""
def __init__(self, lctx, director, palette):
super(BinjaPainter, self).__init__(lctx, director, palette)
#--------------------------------------------------------------------------
# Paint Primitives
#--------------------------------------------------------------------------
#
# NOTE:
# due to the manner in which Binary Ninja implements basic block
# (node) highlighting, there is almost no need to paint individual
# instructions. for now, we will simply make the main instruction
# painting function a no-op
#
def _paint_instructions(self, instructions):
self._action_complete.set()
def _clear_instructions(self, instructions):
bv = disassembler[self.lctx].bv
state = bv.begin_undo_actions()
for address in instructions:
for func in bv.get_functions_containing(address):
func.set_auto_instr_highlight(address, HighlightStandardColor.NoHighlightColor)
self._painted_partial -= set(instructions)
self._painted_instructions -= set(instructions)
self._action_complete.set()
if hasattr(bv, "forget_undo_actions"):
bv.forget_undo_actions(state)
else:
bv.commit_undo_actions(state)
def _partial_paint(self, bv, instructions, color):
for address in instructions:
for func in bv.get_functions_containing(address):
func.set_auto_instr_highlight(address, color)
self._painted_partial |= set(instructions)
self._painted_instructions |= set(instructions)
def _paint_nodes(self, node_addresses):
bv = disassembler[self.lctx].bv
db_coverage = self.director.coverage
db_metadata = self.director.metadata
state = bv.begin_undo_actions()
r, g, b, _ = self.palette.coverage_paint.getRgb()
color = HighlightColor(red=r, green=g, blue=b)
partial_nodes = set()
for node_address in node_addresses:
node_metadata = db_metadata.nodes.get(node_address, None)
node_coverage = db_coverage.nodes.get(node_address, None)
# read comment in ida_painter.py (self._paint_nodes)
if not (node_coverage and node_metadata):
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
# special case for nodes that are only partially executed...
if node_coverage.instructions_executed != node_metadata.instruction_count:
partial_nodes.add(node_address)
self._partial_paint(bv, node_coverage.executed_instructions.keys(), color)
continue
for node in bv.get_basic_blocks_starting_at(node_address):
node.highlight = color
self._painted_nodes |= (set(node_addresses) - partial_nodes)
self._action_complete.set()
if hasattr(bv, "forget_undo_actions"):
bv.forget_undo_actions(state)
else:
bv.commit_undo_actions(state)
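The partial-node special case in `_paint_nodes` above boils down to: a node whose executed-instruction count is below its total instruction count gets per-instruction paint instead of a whole-block highlight. A rough sketch of that classification, with hypothetical metadata/coverage dicts in place of the real objects:

```python
def classify_nodes(node_addresses, metadata, coverage):
    """
    Split nodes into fully-executed and partially-executed sets.
    metadata maps address -> total instruction count,
    coverage maps address -> executed instruction count.
    """
    full, partial = set(), set()
    for address in node_addresses:
        if coverage[address] == metadata[address]:
            full.add(address)     # highlight the whole basic block
        else:
            partial.add(address)  # fall back to per-instruction paint
    return full, partial

metadata = {0x1000: 4, 0x2000: 3}
coverage = {0x1000: 4, 0x2000: 1}
print(classify_nodes([0x1000, 0x2000], metadata, coverage))
# ({4096}, {8192})
```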
def _clear_nodes(self, node_addresses):
bv = disassembler[self.lctx].bv
db_metadata = self.director.metadata
state = bv.begin_undo_actions()
for node_address in node_addresses:
node_metadata = db_metadata.nodes.get(node_address, None)
# read comment in ida_painter.py (self._paint_nodes)
if not node_metadata:
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
for node in bv.get_basic_blocks_starting_at(node_address):
node.highlight = HighlightStandardColor.NoHighlightColor
self._painted_nodes -= set(node_addresses)
self._action_complete.set()
if hasattr(bv, "forget_undo_actions"):
bv.forget_undo_actions(state)
else:
bv.commit_undo_actions(state)
def _refresh_ui(self):
pass
def _cancel_action(self, job):
pass
@@ -0,0 +1,474 @@
import struct
import ctypes
import logging
import functools
import idc
import idaapi
from idaapi import clr_abits, set_abits, netnode, set_node_info
from lighthouse.util import *
from lighthouse.util.disassembler import disassembler
from lighthouse.util.disassembler.ida_api import map_line2citem, map_line2node, lex_citem_indexes, hexrays_available
from lighthouse.painting import DatabasePainter
logger = logging.getLogger("Lighthouse.Painting.IDA")
#------------------------------------------------------------------------------
# MFF_NOWAIT Workaround
#------------------------------------------------------------------------------
#
# due to the asynchronous nature of the database painter core, we need
# to use IDA's execute_sync() with idaapi.MFF_WRITE to perform 'paint'
# actions (which modify the database).
#
# 1. the first issue is that a deadlock can occur when attempting to use
# execute_sync() with MFF_WRITE from a thread when IDA is in the process
# of closing. This would occur when a paint was in progress, and the user
# attempts to abruptly close the database.
#
# the solution to this is to use the MFF_NOWAIT flag with MFF_WRITE, which
# means execute_sync() is non-blocking. this avoids the deadlock that could
# occur between the main thread and the async (painting) thread on close.
#
# 2. but prior to IDA 7.0 SP1, there was a bug with execute_sync() that
# could cause IDA to abort() non-deterministically when using the
# MFF_NOWAIT flag. The abort manifests as a hard-crash of IDA.
#
# this is an issue that has haunted lighthouse since almost the beginning,
# causing a number of compatibility/stability issues. the bug was fixed in
# in IDA 7.0 SP1, but older versions of IDA are afflicted.
#
# this section of code constitutes some of the most fragile, convoluted,
# and regression prone code in lighthouse. through some miraculous feats
# of engineering, the solution below appears to safely resolve both of
# these problems for downlevel versions (IDA 6.8 --> 7.x)
#
from lighthouse.util.qt import QtCore
class ToMainthread(QtCore.QObject):
"""
A Qt object whose sole purpose is to execute code on the mainthread.
Below, we define a Qt signal called 'mainthread'. Any thread can emit() this
signal, where it will be handled in the main application thread.
"""
mainthread = QtCore.pyqtSignal(object)
def __init__(self):
super(ToMainthread, self).__init__()
#
# from any thread, one can call 'mainthread.emit(a_function)', passing
# in a callable object (a_function) which will be executed (through the
# lambda) on the main application thread.
#
self.mainthread.connect(lambda x: x())
def execute_paint(function):
"""
A function decorator to safely paint the IDA database from any thread.
"""
@functools.wraps(function)
def wrapper(*args, **kwargs):
#
# the first argument passed to this decorator will be the
# IDAPainter class instance
#
ida_painter = args[0]
#
# we wrap up the remaining args (and paint function) into a single
# packaged up callable object (a functools.partial)
#
ff = functools.partial(function, *args, **kwargs)
#
# if we are using a 'bugged' downlevel version of IDA, package another
# callable to 'synchronize' a database write. This callable will get
# passed to the main thread and executed through the Qt event loop.
#
# the execute_sync should technically happen in-line, avoiding the
# possibility of deadlocks or aborts as described above.
#
if idaapi.IDA_SDK_VERSION < 710:
fff = functools.partial(idaapi.execute_sync, ff, idaapi.MFF_WRITE)
ida_painter._signal.mainthread.emit(fff)
return idaapi.BADADDR
#
# in IDA 7.1, the MFF_NOWAIT bug is definitely fixed, so we can just
# use it to schedule our paint action ... as designed.
#
return idaapi.execute_sync(ff, idaapi.MFF_NOWAIT | idaapi.MFF_WRITE)
return wrapper
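The decorator above packages the call (function plus arguments) into a single `functools.partial` and picks a dispatch path based on the SDK version. A simplified, IDA-free sketch of the same shape (`fake_execute_sync` is a stand-in for illustration, not the real `idaapi.execute_sync`):

```python
import functools

CALLS = []

def fake_execute_sync(fn, flags):
    # stand-in for idaapi.execute_sync: record the flags, run inline
    CALLS.append(flags)
    return fn()

def execute_paint(sdk_version):
    def decorator(function):
        @functools.wraps(function)
        def wrapper(*args, **kwargs):
            # bundle the call into one callable object
            ff = functools.partial(function, *args, **kwargs)
            if sdk_version < 710:
                # downlevel: route around the MFF_NOWAIT bug
                return fake_execute_sync(ff, "MFF_WRITE")
            # modern: non-blocking scheduled write
            return fake_execute_sync(ff, "MFF_NOWAIT|MFF_WRITE")
        return wrapper
    return decorator

@execute_paint(720)
def paint(x):
    return x * 2

print(paint(21))  # 42
```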
#------------------------------------------------------------------------------
# IDA Painter
#------------------------------------------------------------------------------
class IDAPainter(DatabasePainter):
"""
Asynchronous IDA database painter.
"""
def __init__(self, lctx, director, palette):
super(IDAPainter, self).__init__(lctx, director, palette)
self._streaming_instructions = True
self._idp_hooks = InstructionPaintHooks(director, palette)
self._vduis = {}
# see the MFF_NOWAIT workaround details above
self._signal = ToMainthread()
def terminate(self):
#
# IDA is either closing or simply switching databases... we should try
# to unhook our processor hooks so that artifacts of this painter do
# not carry over to the next IDB / session.
#
# if we don't do this, our current 'IDP' hooks will continue to fire
# once the next IDB is open. we don't want this, because a new painter
# will be spun up and it will install its own instance of hooks...
#
if self._idp_hooks:
self._idp_hooks.unhook()
self._idp_hooks = None
# spin down the painter as usual
super(IDAPainter, self).terminate()
def _notify_status_changed(self, status):
# enable / disable hook based on the painter being enabled or disabled
if status:
self._idp_hooks.hook()
if hexrays_available():
idaapi.install_hexrays_callback(self._hxe_callback)
else:
self._idp_hooks.unhook()
if hexrays_available():
idaapi.remove_hexrays_callback(self._hxe_callback)
# send the status changed signal...
super(IDAPainter, self)._notify_status_changed(status)
#------------------------------------------------------------------------------
# Paint Actions
#------------------------------------------------------------------------------
@execute_paint
def _paint_instructions(self, instructions):
"""
Paint instruction level coverage defined by the current database mapping.
NOTE: we now use 'streaming' mode for instructions rather than this.
"""
color = struct.pack("I", self.palette.coverage_paint+1)
for address in instructions:
set_abits(address, 0x40000)
nn = netnode(address)
nn.supset(20, color, 'A')
self._painted_instructions |= set(instructions)
self._action_complete.set()
@execute_paint
def _clear_instructions(self, instructions):
"""
Clear paint from the given instructions.
NOTE: we now use 'streaming' mode for instructions rather than this.
"""
for address in instructions:
clr_abits(address, 0x40000)
self._painted_instructions -= set(instructions)
self._action_complete.set()
@execute_paint
def _paint_nodes(self, node_addresses):
"""
Paint node level coverage defined by the current database mappings.
"""
db_coverage = self.director.coverage
db_metadata = self.director.metadata
# create a node info object as our vehicle for setting the node color
node_info = idaapi.node_info_t()
node_info.bg_color = self.palette.coverage_paint
node_flags = idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
#
# loop through every node that we have coverage data for, painting them
# in the IDA graph view as applicable.
#
for node_address in node_addresses:
# retrieve all the necessary structures to paint this node
node_coverage = db_coverage.nodes.get(node_address, None)
functions = db_metadata.get_functions_by_node(node_address)
#
# due to the fact that multiple functions may 'share' a node,
# we need to go through and explicitly fetch the node metadata
# from each function when performing a paint.
#
# this is because each function will have a unique node_id in
# the target node_metadata(s)
#
node_metadatas = {}
for function in functions:
# attempt to safely fetch the node metadata from a function
node_metadata = function.nodes.get(node_address, None)
#
# this is possible if function is getting torn down. this is because
# we don't use locks. this just means it is time for us to bail as
# the metadata state is changing and the paint should be canceled
#
if not node_metadata:
node_metadatas = {}
break
node_metadatas[function.address] = node_metadata
#
# if we did not get *everything* that we needed, then it is
# possible the database changed, or the coverage set changed...
#
# this is kind of what we get for not using locks :D but that's
# okay, just stop painting here and let the painter sort it out
#
if not (node_coverage and node_metadatas):
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
# ignore nodes that are only partially executed (any node_metadata
# from the loop above will do; all copies share instruction_count)
if node_coverage.instructions_executed != node_metadata.instruction_count:
continue
# do the *actual* painting of a single node instance
for function_address, node_metadata in iteritems(node_metadatas):
set_node_info(
function_address,
node_metadata.id,
node_info,
node_flags
)
self._painted_nodes |= set(node_addresses)
self._action_complete.set()
@execute_paint
def _clear_nodes(self, node_addresses):
"""
Clear paint from the given graph nodes.
"""
db_metadata = self.director.metadata
# create a node info object as our vehicle for resetting the node color
node_info = idaapi.node_info_t()
node_info.bg_color = idc.DEFCOLOR
node_flags = idaapi.NIF_BG_COLOR | idaapi.NIF_FRAME_COLOR
#
# loop through every node that we have metadata for, clearing
# their paint (color) in the IDA graph view as applicable.
#
# read self._paint_nodes() comments for more info, the code below
# is very similar, sans the repetitive comments
#
for node_address in node_addresses:
functions = db_metadata.get_functions_by_node(node_address)
node_metadatas = {}
for function in functions:
node_metadata = function.nodes.get(node_address, None)
if not node_metadata:
node_metadatas = {}
break
node_metadatas[function.address] = node_metadata
# abort if something looks like it changed...
if not node_metadatas:
self._msg_queue.put(self.MSG_ABORT)
node_addresses = node_addresses[:node_addresses.index(node_address)]
break
# do the *actual* painting of a single node instance
for function_address, node_metadata in iteritems(node_metadatas):
set_node_info(
function_address,
node_metadata.id,
node_info,
node_flags
)
self._painted_nodes -= set(node_addresses)
self._action_complete.set()
@execute_paint
def _refresh_ui(self):
"""
Note that this has been decorated with @execute_paint (vs @execute_ui)
to help avoid deadlocking on exit.
"""
for vdui in self._vduis.values():
if vdui.valid():
vdui.refresh_ctext(False)
idaapi.refresh_idaview_anyway()
def _cancel_action(self, job_id):
if idaapi.IDA_SDK_VERSION < 710:
return
idaapi.cancel_exec_request(job_id)
#------------------------------------------------------------------------------
# Painting - HexRays (Decompilation / Source)
#------------------------------------------------------------------------------
def paint_hexrays(self, cfunc, db_coverage):
"""
Paint decompilation text for the given HexRays Window.
"""
logger.debug("Painting HexRays for 0x%X" % cfunc.entry_ea)
# more code-friendly, readable aliases
db_metadata = db_coverage._metadata
decompilation_text = cfunc.get_pseudocode()
#
# the objective here is to paint hexrays lines that are associated with
# our runtime data. unfortunately, there are very few API resources that
# link decompilation line numbers to anything (eg, citems, nodes, ea, etc)
#
# this means that we must build our own data relationships to draw from
#
#
# first, let's build a relationship between a given line of text, and the
# citems that contribute to it. the only way to do that (as I see it) is
# to lex citem ID's out of the decompiled output string
#
line2citem = map_line2citem(decompilation_text)
#
# now that we have some understanding of how citems contribute to each
# line of decompiled text, we can use this information to build a
# relationship that ties graph nodes (basic blocks) to individual lines.
#
line2node = map_line2node(cfunc, db_metadata, line2citem)
# great, now we have all the information we need to paint
#
# paint hexrays output
#
lines_painted = 0
# extract the node addresses that have been hit by our function's mapping data
executed_nodes = set(viewkeys(db_coverage.functions[cfunc.entry_ea].nodes))
#
# now we loop through every line_number of the decompiled text that claims
# to have a relationship with a graph node (basic block) and check to see
# if it contains a node our coverage has marked as executed
#
for line_number, line_nodes in iteritems(line2node):
#
# if there is any intersection of nodes on this line and the coverage
# data's set of executed nodes, we are inclined to color it
#
if line_nodes & executed_nodes:
decompilation_text[line_number].bgcolor = self.palette.coverage_paint
lines_painted += 1
#
# done painting from our mapping data
#
# if there was nothing painted yet, there's no point in continuing...
if not lines_painted:
return
#
# if we made it this far, we must have painted *some* lines inside the
# function. that means we should paint the function declaration, and
# header (variable declaration) lines as their execution will be implied
#
for line_number in xrange(0, cfunc.hdrlines):
decompilation_text[line_number].bgcolor = self.palette.coverage_paint
lines_painted += 1
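The core of the hexrays paint above is a set intersection per line: a line is colored if any of its nodes appears in the executed set, and the header lines are painted whenever any body line was. A stripped-down sketch of that logic, using plain dicts in place of the decompilation objects:

```python
def painted_lines(line2node, executed_nodes, hdrlines):
    """
    Return the set of line numbers to paint: every line whose node
    set intersects the executed nodes, plus the header lines when
    anything else was painted.
    """
    painted = {line for line, nodes in line2node.items()
               if nodes & executed_nodes}
    if painted:
        # some body lines were hit, so the declaration / header
        # lines are implied to have executed as well
        painted |= set(range(hdrlines))
    return painted

line2node = {2: {0x1000}, 3: {0x2000}, 4: {0x3000}}
print(sorted(painted_lines(line2node, {0x1000, 0x3000}, hdrlines=2)))
# [0, 1, 2, 4]
```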
def _hxe_callback(self, event, *args):
"""
HexRays event handler.
"""
# decompilation text generation is complete and it is about to be shown
if event == idaapi.hxe_text_ready:
# more code-friendly, readable aliases
vdui = args[0]
cfunc = vdui.cfunc
self._vduis[vdui.view_idx] = vdui
# if there's no coverage data for this function, there's nothing to do
if cfunc.entry_ea not in self.director.coverage.functions:
return 0
# paint the decompilation text for this function
self.paint_hexrays(cfunc, self.director.coverage)
# stop tracking vdui's if they close...
elif event == idaapi.hxe_close_pseudocode:
vdui = args[0]
self._vduis.pop(vdui.view_idx, None)
return 0
#------------------------------------------------------------------------------
# Instruction Paint Streaming (Processor Hooks)
#------------------------------------------------------------------------------
class InstructionPaintHooks(idaapi.IDP_Hooks):
"""
Hook IDA's processor callbacks to paint instructions on the fly.
"""
def __init__(self, director, palette):
super(InstructionPaintHooks, self).__init__()
self.director = director
self.palette = palette
def ev_get_bg_color(self, pcolor, ea):
if ea not in self.director.coverage.coverage:
return 0
bgcolor = ctypes.cast(int(pcolor), ctypes.POINTER(ctypes.c_int))
bgcolor[0] = self.palette.coverage_paint
return 1
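`ev_get_bg_color` receives the color out-parameter as a raw pointer value, so the hook writes through it with `ctypes`. A minimal self-contained demonstration of that cast-and-write pattern (no IDA involved; a `c_uint` stands in for IDA's color slot):

```python
import ctypes

def write_color(pcolor_address, color):
    # reinterpret the integer address as a pointer to a C uint,
    # then write the color through it (what ev_get_bg_color does)
    bgcolor = ctypes.cast(pcolor_address, ctypes.POINTER(ctypes.c_uint))
    bgcolor[0] = color
    return 1  # non-zero tells the caller the color was set

slot = ctypes.c_uint(0)
write_color(ctypes.addressof(slot), 0x00FF00)
print(hex(slot.value))  # 0xff00
```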
+671
View File
@@ -0,0 +1,671 @@
import abc
import time
import logging
import threading
from lighthouse.util import *
from lighthouse.util.debug import catch_errors
from lighthouse.coverage import FunctionCoverage
logger = logging.getLogger("Lighthouse.Painting")
class DatabasePainter(object):
"""
An asynchronous disassembler database painting engine.
"""
__metaclass__ = abc.ABCMeta
MSG_ABORT = -1
MSG_TERMINATE = 0
MSG_REPAINT = 1
MSG_FORCE_REPAINT = 2
MSG_CLEAR = 3
MSG_FORCE_CLEAR = 4
MSG_REBASE = 5
def __init__(self, lctx, director, palette):
#----------------------------------------------------------------------
# Misc
#----------------------------------------------------------------------
self.lctx = lctx
self.palette = palette
self.director = director
self._enabled = False
self._started = False
#----------------------------------------------------------------------
# Painted State
#----------------------------------------------------------------------
#
# the coverage painter maintains its own internal record of what
# instruction addresses and graph nodes it has painted.
#
self._imagebase = BADADDR
self._painted_nodes = set()
self._painted_partial = set()
self._painted_instructions = set()
#
# these toggles will let the core painter (this class) know that it
# does not have to order explicit paints of instructions or nodes.
#
# this is because a disassembler-specific painter may be able to hook
# unique callbacks for painting graphs nodes or instructions
# 'on-the-fly' as they are rendered.
#
# these types of paints are ephemeral and the most performant, they
# also will not need to be tracked by the painter.
#
self._streaming_nodes = False
self._streaming_instructions = False
#----------------------------------------------------------------------
# Async
#----------------------------------------------------------------------
#
# to communicate with the asynchronous painting thread, we send a
# message via the thread event to signal a new paint request, and
# use the repaint_requested bool to interrupt a running paint request.
#
self._action_complete = threading.Event()
self._msg_queue = queue.Queue()
self._end_threads = False
#
# asynchronous database painting thread
#
self._painting_worker = threading.Thread(
target=self._async_database_painter,
name="DatabasePainter"
)
#----------------------------------------------------------------------
# Callbacks
#----------------------------------------------------------------------
# painter callbacks
self._status_changed_callbacks = []
# register for cues from the director
self.director.coverage_switched(self.repaint)
self.director.coverage_modified(self.repaint)
self.director.refreshed(self.check_rebase)
def start(self):
"""
Start the painter.
"""
if self._started:
return
# start the painter thread
self._painting_worker.start()
# all done
self._started = True
self.set_enabled(True)
#--------------------------------------------------------------------------
# Status
#--------------------------------------------------------------------------
@property
def enabled(self):
"""
Return the active painting status of the painter.
"""
return self._enabled
def set_enabled(self, enabled):
"""
Enable or disable the painter.
"""
# enabled/disabled status is not changing, ignore...
if enabled == self._enabled:
return
lmsg("%s painting..." % ("Enabling" if enabled else "Disabling"))
self._enabled = enabled
# notify listeners that the painter has been enabled/disabled
self._notify_status_changed(enabled)
# paint or clear the database based on the change of status...
if enabled:
self._send_message(self.MSG_REPAINT)
else:
self._send_message(self.MSG_CLEAR)
#--------------------------------------------------------------------------
# Commands
#--------------------------------------------------------------------------
def terminate(self):
"""
Cleanup & terminate the painter.
"""
self._end_threads = True
self._msg_queue.put(self.MSG_TERMINATE)
try:
self._painting_worker.join()
except RuntimeError: # thread was never started...
pass
# best effort to free up resources & improve interpreter spindown
del self._painted_nodes
del self._painted_instructions
del self._status_changed_callbacks
def repaint(self):
"""
Paint coverage defined by the current database mappings.
"""
self._send_message(self.MSG_REPAINT)
def force_repaint(self):
"""
Force a coverage repaint of the current database mappings.
"""
self._send_message(self.MSG_FORCE_REPAINT)
def force_clear(self):
"""
Clear all paint from the current database (based on metadata)
"""
self._send_message(self.MSG_FORCE_CLEAR)
self.set_enabled(False)
def check_rebase(self):
"""
Perform a rebase on the painted data cache (if necessary).
"""
self._send_message(self.MSG_REBASE)
self._send_message(self.MSG_REPAINT)
def _send_message(self, message):
"""
Queue a painter command for execution.
"""
if not self._started:
return
self._msg_queue.put(message)
#--------------------------------------------------------------------------
# Commands
#--------------------------------------------------------------------------
def status_changed(self, callback):
"""
Subscribe a callback for coverage switch events.
"""
register_callback(self._status_changed_callbacks, callback)
def _notify_status_changed(self, status):
"""
Notify listeners of a coverage switch event.
"""
notify_callback(self._status_changed_callbacks, status)
#--------------------------------------------------------------------------
# Paint Primitives
#--------------------------------------------------------------------------
@abc.abstractmethod
def _paint_instructions(self, instructions):
"""
Paint instruction coverage defined by the current database mapping.
"""
pass
@abc.abstractmethod
def _clear_instructions(self, instructions):
"""
Clear paint from the given instructions.
"""
pass
@abc.abstractmethod
def _paint_nodes(self, nodes_coverage):
"""
Paint node coverage defined by the current database mappings.
"""
pass
@abc.abstractmethod
def _clear_nodes(self, nodes_metadata):
"""
Clear paint from the given graph nodes.
"""
pass
@abc.abstractmethod
def _refresh_ui(self):
"""
Refresh the disassembler UI to ensure paint is rendered.
"""
pass
@abc.abstractmethod
def _cancel_action(self, job):
"""
Cancel a paint action using something representing its job.
"""
pass
#------------------------------------------------------------------------------
# Painting - High Level
#------------------------------------------------------------------------------
def _priority_paint(self):
"""
Immediately repaint regions of the database visible to the user.
Return True upon completion, or False if interrupted.
"""
if self._streaming_instructions and self._streaming_nodes:
return True
# get current function / user location in the database
cursor_address = disassembler[self.lctx].get_current_address()
# attempt to paint the functions in the immediate cursor vicinity
result = self._priority_paint_functions(cursor_address)
# force a refresh *now* as this is a priority paint
self._refresh_ui()
# all done
return result
def _priority_paint_functions(self, target_address, neighbors=1):
"""
Paint functions in the immediate vicinity of the given address.
This will paint both the instructions & graph nodes of defined functions.
"""
db_metadata = self.director.metadata
db_coverage = self.director.coverage
blank_coverage = FunctionCoverage(BADADDR)
# get the function metadata for the function closest to our cursor
function_metadata = db_metadata.get_closest_function(target_address)
if not function_metadata:
return False
# select the range of functions around us that we would like to paint
func_num = db_metadata.get_function_index(function_metadata.address)
func_num_start = max(func_num - neighbors, 0)
func_num_end = min(func_num + neighbors + 1, len(db_metadata.functions))
# repaint the specified range of functions
for current_num in xrange(func_num_start, func_num_end):
# get the next function to paint
function_metadata = db_metadata.get_function_by_index(current_num)
if not function_metadata:
continue
# get the function coverage data for the target address
function_address = function_metadata.address
function_coverage = db_coverage.functions.get(function_address, blank_coverage)
if not self._streaming_nodes:
# clear nodes
must_clear = sorted(set(function_metadata.nodes) - set(function_coverage.nodes))
self._action_complete.clear()
self._clear_nodes(must_clear)
self._action_complete.wait()
# paint nodes
must_paint = sorted(function_coverage.nodes)
self._action_complete.clear()
self._paint_nodes(must_paint)
self._action_complete.wait()
if not self._streaming_instructions:
# clear instructions
must_clear = sorted(function_metadata.instructions - function_coverage.instructions)
self._action_complete.clear()
self._clear_instructions(must_clear)
self._action_complete.wait()
# paint instructions
must_paint = sorted(function_coverage.instructions)
self._action_complete.clear()
self._paint_instructions(must_paint)
self._action_complete.wait()
# paint finished successfully
return True
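`_priority_paint_functions` selects a window of function indices around the cursor's function. The index clamping can be sketched in isolation (`neighbors=1` means one function on each side, clamped to the valid range):

```python
def neighbor_range(func_num, total_functions, neighbors=1):
    """Return the [start, end) index window of functions to repaint."""
    start = max(func_num - neighbors, 0)
    end = min(func_num + neighbors + 1, total_functions)
    return range(start, end)

print(list(neighbor_range(0, 10)))  # [0, 1]
print(list(neighbor_range(5, 10)))  # [4, 5, 6]
print(list(neighbor_range(9, 10)))  # [8, 9]
```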
def _paint_database(self):
"""
Repaint the current database based on the current state.
"""
logger.debug("Painting database...")
# more code-friendly, readable aliases (db_XX == database_XX)
db_coverage = self.director.coverage
db_metadata = self.director.metadata
start = time.time()
#------------------------------------------------------------------
# initialize imagebase if it hasn't been already...
if self._imagebase == BADADDR:
self._imagebase = db_metadata.imagebase
# immediately paint user-visible regions of the database
if not self._priority_paint():
return False # a repaint was requested
#
# if the painter is not capable of 'streaming' the coverage paint,
# then we must explicitly paint the instructions & nodes here
#
if not self._streaming_instructions:
#
# TODO: 'partially painted nodes' might be a little funny / not
# working correctly in IDA if we ever disable instruction streaming...
#
# compute the painted instructions that will not get painted over
stale_instr = self._painted_instructions - db_coverage.coverage
stale_instr |= (self._painted_partial - db_coverage.partial_instructions)
# clear old instruction paint
if not self._async_action(self._clear_instructions, stale_instr):
return False # a repaint was requested
# paint new instructions
new_instr = sorted(db_coverage.coverage - self._painted_instructions)
if not self._async_action(self._paint_instructions, new_instr):
return False # a repaint was requested
if not self._streaming_nodes:
# compute the painted nodes that will not get painted over
stale_nodes = self._painted_nodes - viewkeys(db_coverage.nodes)
stale_nodes |= db_coverage.partial_nodes
# clear old node paint
if not self._async_action(self._clear_nodes, stale_nodes):
return False # a repaint was requested
# paint new nodes
new_nodes = sorted(viewkeys(db_coverage.nodes) - self._painted_nodes)
if not self._async_action(self._paint_nodes, new_nodes):
return False # a repaint was requested
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Painting took %.2f seconds" % (end - start))
# paint finished successfully
return True
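The repaint above is differential: only addresses that changed state get touched. The stale/new set arithmetic at its heart can be isolated as follows (hypothetical address sets for illustration):

```python
def paint_diff(painted, covered):
    """
    Compute the minimal paint update:
    stale = painted but no longer covered (must be cleared),
    new   = covered but not yet painted (must be painted).
    """
    stale = painted - covered
    new = covered - painted
    return stale, new

painted = {0x1000, 0x1004, 0x1008}
covered = {0x1004, 0x1008, 0x100C}
print(paint_diff(painted, covered))
# ({4096}, {4108})
```

Addresses in both sets are left alone, which is what keeps repaints cheap on large databases.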
def _clear_database(self):
"""
Clear all paint from the current database using the known paint state.
"""
logger.debug("Clearing database paint...")
start = time.time()
#------------------------------------------------------------------
db_metadata = self.director.metadata
# clear all instructions
if not self._streaming_instructions:
if not self._async_action(self._clear_instructions, self._painted_instructions):
return False # a repaint was requested
# clear all nodes
if not self._streaming_nodes:
if not self._async_action(self._clear_nodes, self._painted_nodes):
return False # a repaint was requested
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Database paint cleared in %.2f seconds..." % (end-start))
# sanity checks...
assert self._painted_nodes == set()
assert self._painted_instructions == set()
# paint finished successfully
return True
def _force_paint_database(self):
"""
Forcibly repaint the database.
"""
db_metadata = self.director.metadata
text = "Repainting the database..."
logger.debug(text)
# see _force_clear_database() for notes on waitbox modality under IDA
is_modal = bool(disassembler.NAME != "IDA")
disassembler.execute_ui(disassembler.show_wait_box)(text, is_modal)
start = time.time()
#------------------------------------------------------------------
# discard current / known paint state
self._painted_nodes = set()
self._painted_partial = set()
self._painted_instructions = set()
# paint the database...
self._paint_database()
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Database repainted in %.2f seconds..." % (end-start))
time.sleep(.2) # XXX: this seems to fix a bug where the waitbox doesn't close if the paint is too fast??
disassembler.execute_ui(disassembler.hide_wait_box)()
# paint finished successfully
return True
def _force_clear_database(self):
"""
Forcibly clear the paint from all known database addresses.
"""
db_metadata = self.director.metadata
text = "Forcibly clearing all paint from database..."
logger.debug(text)
#
# NOTE: forcefully clearing the database of paint can take a long time
# in certain cases, so we want to block the user from doing anything
# to the database while we're working.
#
# we will pop up a waitbox to block them, but we have to be careful as
# a *modal* waitbox will conflict with IDA's processing of MFF_WRITE
# requests making it wait for the waitbox to close before processing
#
# therefore, we put in a little bodge wire here to make sure the
# waitbox is *not* modal for IDA... but will be in the normal case.
# it also helps that IDA will be busy processing our 'write' requests,
# so the UI will be mostly frozen to the user anyway!
#
is_modal = bool(disassembler.NAME != "IDA")
disassembler.execute_ui(disassembler.show_wait_box)(text, is_modal)
start = time.time()
#------------------------------------------------------------------
self._action_complete.clear()
self._clear_instructions(sorted(db_metadata.instructions))
self._action_complete.wait()
self._action_complete.clear()
self._clear_nodes(sorted(db_metadata.nodes))
self._action_complete.wait()
#------------------------------------------------------------------
end = time.time()
logger.debug(" - Database paint cleared in %.2f seconds..." % (end-start))
time.sleep(.2) # XXX: this seems to fix a bug where the waitbox doesn't close if the clear is too fast??
disassembler.execute_ui(disassembler.hide_wait_box)()
# paint finished successfully
return True
def _rebase_database(self):
"""
Rebase the active database paint.
TODO/XXX: there may be some edge cases where painting can be wrong if
a rebase occurs while the painter is running.
"""
db_metadata = self.director.metadata
# a rebase has not occurred
if not db_metadata.cached or (db_metadata.imagebase == self._imagebase):
return False
# compute the offset of the rebase
rebase_offset = db_metadata.imagebase - self._imagebase
# rebase the cached addresses of what we have painted
self._painted_nodes = set([address+rebase_offset for address in self._painted_nodes])
self._painted_instructions = set([address+rebase_offset for address in self._painted_instructions])
self._imagebase = db_metadata.imagebase
# a rebase has been observed
return True
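A rebase just slides every cached painted address by the delta between the old and new imagebase. The address translation can be sketched as:

```python
def rebase_painted(painted, old_base, new_base):
    """Shift every cached painted address by the rebase delta."""
    delta = new_base - old_base
    return {address + delta for address in painted}

painted = {0x401000, 0x401010}
print(sorted(hex(a) for a in rebase_painted(painted, 0x400000, 0x7F0000)))
# ['0x7f1000', '0x7f1010']
```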
#--------------------------------------------------------------------------
# Asynchronous Painting
#--------------------------------------------------------------------------
@catch_errors
def _async_database_painter(self):
"""
Asynchronous database painting worker loop.
"""
logger.debug("Starting DatabasePainter thread...")
#
# Asynchronous Database Painting Loop
#
while not self._end_threads:
# wait for the next command to come through
action = self._msg_queue.get()
# repaint the database based on the current state
if action == self.MSG_REPAINT:
result = self._paint_database()
# forcibly repaint the database based on the current state
elif action == self.MSG_FORCE_REPAINT:
result = self._force_paint_database()
# clear database base on the current state
elif action == self.MSG_CLEAR:
result = self._clear_database()
# clear all possible database paint
elif action == self.MSG_FORCE_CLEAR:
result = self._force_clear_database()
# check for a rebase of the painted data
elif action == self.MSG_REBASE:
result = self._rebase_database()
# thrown internally to escape a stale paint, just ignore
elif action == self.MSG_ABORT:
continue
# spin down the painting thread (this thread)
elif action == self.MSG_TERMINATE:
break
# unknown command
else:
logger.error("UNKNOWN COMMAND! %s" % str(action))
break
# refresh the UI to ensure paint changes are rendered
self._refresh_ui()
# thread exit
logger.debug("Exiting DatabasePainter thread...")
def _async_action(self, paint_action, work_iterable):
"""
Split a normal paint routine into interruptible chunks.
Internal routine for asynchronous painting.
"""
CHUNK_SIZE = 1500 # somewhat arbitrary
# split the given nodes into multiple paints
for work_chunk in chunks(list(work_iterable), CHUNK_SIZE):
#
# reset the paint event signal so that it is ready for the next
# paint request. it will let us know when the asynchronous paint
# action has completed in the IDA main thread
#
self._action_complete.clear()
#
# paint or unpaint a chunk of 'work' (nodes, or instructions) with
# the given paint function (eg, paint_nodes, clear_instructions)
#
paint_job = paint_action(work_chunk)
#
# wait for the asynchronous paint event to complete or a signal that
# we should end this thread (via end_threads)
#
while not (self._action_complete.wait(timeout=0.2) or self._end_threads):
continue
#
# our end_threads signal/bool can only originate from the main IDA
# thread (plugin termination). we make the assumption that no more
# MFF_WRITE requests (eg, 'paint_action') will get processed.
#
# we do a best effort to cancel the in-flight job (just in case)
# and return so we can exit the thread.
#
if self._end_threads:
self._cancel_action(paint_job)
return False
#
# the operation has been interrupted by a repaint request, bail
# immediately so that we can process the next repaint
#
if not self._msg_queue.empty():
return False
# operation completed successfully
return True
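The chunked-paint loop above assumes a small `chunks()` helper that slices the work iterable into fixed-size pieces. A minimal sketch of such a helper (the name matches the call site above, but this is an illustrative reimplementation, not Lighthouse's own utility code):

```python
def chunks(sequence, n):
    """Yield successive n-sized slices of a list."""
    for i in range(0, len(sequence), n):
        yield sequence[i:i + n]

# eg, a large list of paint addresses becomes smaller interruptible batches
batches = list(chunks(list(range(5)), 2))
```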
@@ -0,0 +1 @@
from .coverage_reader import CoverageReader
@@ -0,0 +1,114 @@
import abc
class CoverageFile(object):
"""
Templated class for Lighthouse-compatible code coverage file reader.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def __init__(self, filepath=None):
self.filepath = filepath
self.modules = {}
self._parse()
#--------------------------------------------------------------------------
# Parsing Routines
#--------------------------------------------------------------------------
@abc.abstractmethod
def _parse(self):
"""
Load and parse coverage data from the file defined by self.filepath
Within this function, a custom CoverageFile is expected to attempt to
parse the coverage file from disk. If the coverage file does not appear
to match the format expected by this parser -- that is okay.
Should this parser crash and burn, the CoverageReader will simply move
on to the next available parser and discard this attempt.
This function should *only* parse & categorize the coverage data that
it loads from disk. If this function returns without error, the
CoverageReader will attempt to call one of the get() functions later
to retrieve the data you have loaded.
The best coverage file formats will contain some sort of mapping
for the coverage data that ties it to a module or binary that was in
the instrumented process space.
If this mapping is known, then this function should strive to store
the coverage data in the self.modules dictionary, where
self.modules[module_name] = [ coverage_addresses ]
"""
raise NotImplementedError("Coverage parser not implemented")
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
#
# if you are writing a parser for a custom coverage file format, your
# parser is *REQUIRED* to implement one of the following routines.
#
# the CoverageReader will attempt to retrieve parsed data from this class
# using one of the functions below.
#
def get_addresses(self, module_name=None):
"""
Return coverage data for the named module as absolute addresses.
If no name is given / available via self.modules, the trace is assumed
to be an ABSOLUTE ADDRESS TRACE.
These are arguably the least flexible kind of trace available, but are
still provided as an option. This function should return a list of
integers representing absolute coverage addresses that match the open
disassembler database...
coverage_addresses = [address, address1, address2, ...]
"""
raise NotImplementedError("Absolute addresses not supported by this log format")
def get_offsets(self, module_name):
"""
Return coverage data for the named module as relative offsets.
This function should return a list of integers representing the
relative offset of an executed instruction OR basic block from the
base of the requested module (module_name).
It is *okay* to return an instruction trace, OR a basic block trace
from this function. Lighthouse will automatically detect basic block
based traces and 'explode' them into instruction traces.
coverage_data = [offset, offset2, offset3, ...]
"""
raise NotImplementedError("Relative addresses not supported by this log format")
def get_offset_blocks(self, module_name):
"""
Return coverage data for the named module in block form.
This function should return a list of tuples representing the coverage
for the requested module (module_name). The tuples must be in the form
of (offset, size).
offset - a relative offset from the module_name base address
size - the size of the instruction, block, or sequence executed
eg, if a basic block of 24 bytes in length at kernel32.dll+0x4182 was
executed, its tuple would be (0x4182, 24).
The complete list of coverage data returned by this function should be in
the following form:
coverage_data = [(offset, size), (offset1, size1), ...]
"""
raise NotImplementedError("Block form not supported by this log format")
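Taken together, the contract above (parse inside `_parse()`, expose results through a `get_*()` routine) can be sketched with a toy parser. The class below is purely illustrative and stands alone rather than subclassing the real `CoverageFile`:

```python
class ToyCoverageFile(object):
    """A hypothetical parser following the CoverageFile contract."""

    def __init__(self, lines):
        self.modules = {}
        self._lines = lines
        self._parse()     # parse immediately, as the abstraction expects

    def _parse(self):
        # each line is assumed to be 'module+hex_offset', eg 'a.dll+0x10'
        for line in self._lines:
            name, offset = line.rsplit("+", 1)
            self.modules.setdefault(name, []).append(int(offset, 16))

    def get_offsets(self, module_name):
        # relative offsets for the named module, per the docstring above
        return self.modules.get(module_name, [])
```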
@@ -0,0 +1,140 @@
import os
import sys
import inspect
import logging
import traceback
from .coverage_file import CoverageFile
from lighthouse.util.python import iteritems
from lighthouse.exceptions import CoverageParsingError
logger = logging.getLogger("Lighthouse.Reader")
MODULES_DIRECTORY = os.path.join(os.path.dirname(os.path.realpath(__file__)), "parsers")
class CoverageReader(object):
"""
Middleware to automatically parse and load different coverage file formats.
This class will dynamically load and make use of coverage file parsers
that subclass from the CoverageFile abstraction and live within the
reader's 'parsers' folder.
This should allow end-users to write parsers for custom coverage file
formats without having to modify any of Lighthouse's existing code (ideally).
"""
def __init__(self):
self._installed_parsers = {}
self._import_parsers()
def open(self, filepath):
"""
Open and parse a coverage file from disk.
Returns a CoverageFile on success, or raises CoverageParsingError on failure.
"""
coverage_file = None
parse_failures = {}
# attempt to parse the given coverage file with each available parser
for name, parser in iteritems(self._installed_parsers):
logger.debug("Attempting parse with '%s'" % name)
# attempt to open/parse the coverage file with the given parser
try:
coverage_file = parser(filepath)
break
# log the exceptions for each parse failure
except Exception as e:
parse_failures[name] = traceback.format_exc()
logger.debug("| Parse FAILED - " + str(e))
#logger.exception("| Parse FAILED")
#
# if *all* the coverage file parsers failed, raise an exception with
# information for each failure (for debugging)
#
if not coverage_file:
raise CoverageParsingError(filepath, parse_failures)
# successful parse
logger.debug("| Parse OKAY")
return coverage_file
def _import_parsers(self):
"""
Scan and import coverage file parsers from the 'parsers' directory.
"""
target_subclass = CoverageFile
ignored_files = ["__init__.py"]
# loop through all the files in the parsers folder
for filename in os.listdir(MODULES_DIRECTORY):
# ignore specified files, and anything not *.py
if filename in ignored_files or filename.endswith(".py") == False:
continue
# attempt to load a CoverageFile format from the current *.py file
logger.debug("| Searching file %s" % filename)
parser_file = filename[:-3]
parser_class = self._locate_subclass(parser_file, target_subclass)
if not parser_class:
logger.warning("| - No object subclassing from %s found in %s..." \
% (target_subclass.__name__, parser_file))
continue
# instantiate and add the parser to our dict of imported parsers
logger.debug("| | Found %s" % parser_class.__name__)
self._installed_parsers[parser_class.__name__] = parser_class
logger.debug("+- Done dynamically importing parsers")
# return the number of modules successfully imported
return self._installed_parsers
def _locate_subclass(self, module_file, target_subclass):
"""
Return the first matching target_subclass in module_file.
This function is used to scan a specific file (module_file) in the
Lighthouse parsers directory for class definitions that subclass from
target_subclass.
We use this to dynamically import, locate, and return objects that are
utilizing our CoverageFile abstraction.
"""
module = None
module_class = None
# attempt to import the given filepath as a python module
try:
module = __import__("lighthouse.reader.parsers." + module_file, globals(), locals(), ['object'])
except Exception as e:
logger.exception("| - Parser import failed")
return None
#
# inspect the module for any classes that subclass from target_subclass
# eg: target_subclass == CoverageFile
#
class_members = inspect.getmembers(module, inspect.isclass)
for a_class in class_members:
# does the current class definition we're inspecting subclass
# from target_subclass? if so, it is a match
try:
if a_class[1].__bases__[0] == target_subclass:
module_class = a_class[1]
break
# this class does not subclass / etc / not interesting / ignore it
except IndexError as e:
pass
# return discovered parser or None
return module_class
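Note that `_locate_subclass()` only matches *direct* subclasses, since it compares `__bases__[0]` rather than using `issubclass()`. That behavior can be demonstrated in isolation (class names here are made up for illustration):

```python
class Base(object):
    pass

class DirectChild(Base):          # immediate subclass, will match
    pass

class GrandChild(DirectChild):    # indirect subclass, will NOT match
    pass

def find_direct_subclass(classes, target):
    """Return the first class whose immediate base is 'target'."""
    for cls in classes:
        try:
            if cls.__bases__[0] is target:
                return cls
        except IndexError:        # class with no bases / not interesting
            pass
    return None
```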
@@ -1,99 +1,121 @@
#!/usr/bin/python
import os
import re
import sys
import mmap
import struct
import collections
from ctypes import *
#
# I know people like to use this parser in their own projects, so this
# if/def makes it compatible with being imported or used outside Lighthouse
#
try:
from lighthouse.exceptions import CoverageMissingError
from lighthouse.reader.coverage_file import CoverageFile
g_lighthouse = True
except ImportError as e:
CoverageFile = object
g_lighthouse = False
#------------------------------------------------------------------------------
# drcov log parser
# DynamoRIO Drcov Log Parser
#------------------------------------------------------------------------------
class DrcovData(object):
class DrcovData(CoverageFile):
"""
A drcov log parser.
"""
def __init__(self, filepath=None):
# original filepath
def __init__(self, filepath=None):
self.filepath = filepath
# drcov header attributes
self.version = 0
self.flavor = None
self.flavor = None
# drcov module table
self.module_table_count = 0
self.module_table_count = 0
self.module_table_version = 0
self.modules = []
self.modules = {}
# drcov basic block data
self.bb_table_count = 0
self.bbs = []
self.bb_table_count = 0
self.bb_table_is_binary = True
self.basic_blocks = []
# parse the given filepath
self._parse_drcov_file(filepath)
# parse
if g_lighthouse:
super(DrcovData, self).__init__(filepath)
else:
self._parse()
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_module(self, module_name, fuzzy=True):
def get_offsets(self, module_name):
"""
Get a module by its name.
Note that this is a 'fuzzy' lookup by default.
Return coverage data as basic block offsets for the named module.
"""
modules = self.modules.get(module_name, [])
if not modules:
return []
# fuzzy module name lookup
if fuzzy:
#
# I don't know if this should ever actually trigger, but if it does,
# it is a strange testcase to collect coverage against. It means that
# maybe the target library/module was loaded, unloaded, and reloaded?
#
# if someone ever actually triggers this, we can look into it :S
#
# attempt lookup using case-insensitive filename
for module in self.modules:
if module_name.lower() in module.filename.lower():
return module
if self.version > 2:
assert all(module.containing_id == modules[0].id for module in modules)
#
# no hits yet... let's cleave the extension from the given module
# name (if present) and try again
#
# extract the unique module ids that we need to collect blocks for
mod_ids = [module.id for module in modules]
if "." in module_name:
module_name = module_name.split(".")[0]
# loop through the coverage data and filter out data for the target ids
coverage_blocks = [bb.start for bb in self.bbs if bb.mod_id in mod_ids]
# attempt lookup using case-insensitive filename without extension
for module in self.modules:
if module_name.lower() in module.filename.lower():
return module
# return the filtered coverage blocks
return coverage_blocks
def get_offset_blocks(self, module_name):
"""
Return coverage data as basic blocks (offset, size) for the named module.
"""
modules = self.modules.get(module_name, [])
if not modules:
return []
# NOTE: see comment in get_offsets() for more info...
if self.version > 2:
assert all(module.containing_id == modules[0].id for module in modules)
# extract the unique module ids that we need to collect blocks for
mod_ids = [module.id for module in modules]
# loop through the coverage data and filter out data for the target ids
if self.version < 3:
coverage_blocks = [(bb.start, bb.size) for bb in self.bbs if bb.mod_id in mod_ids]
#
# drcov version 3 does not include the 'preferred' / sub-module base
# in the bb offset, so we must add that base offset before returning
# the block offsets to correctly normalize things
#
# it's unclear if the preferred_base for given sub-module segments
# will always be correct, so we opt to simply use the first segment
# in a given module as the base to compute the known runtime offset
#
# strict lookup
else:
for module in self.modules:
if module_name == module.filename:
return module
# no matching module exists
return None
def get_blocks_by_module(self, module_name):
"""
Extract coverage blocks pertaining to the named module.
"""
# locate the coverage that matches the given module_name
module = self.get_module(module_name)
# if we fail to find a module that matches the given name, bail
if not module:
raise ValueError("Failed to find module '%s' in coverage data" % module_name)
# extract module id for speed
mod_id = module.id
# loop through the coverage data and filter out data for only this module
coverage_blocks = [(bb.start, bb.size) for bb in self.basic_blocks if bb.mod_id == mod_id]
mod_bases = dict([(module.id, module.start - modules[0].start) for module in modules])
coverage_blocks = [(mod_bases[bb.mod_id] + bb.start, bb.size) for bb in self.bbs if bb.mod_id in mod_ids]
# return the filtered coverage blocks
return coverage_blocks
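The v3+ sub-module rebasing above boils down to shifting each block by the distance between its segment and the module's first segment. A self-contained toy version with hypothetical module/block records:

```python
from collections import namedtuple

Module = namedtuple("Module", "id start")
Block = namedtuple("Block", "mod_id start size")

# two segments of one module; the first segment serves as the base
modules = [Module(id=0, start=0x1000), Module(id=1, start=0x3000)]
bbs = [Block(mod_id=0, start=0x10, size=4), Block(mod_id=1, start=0x20, size=8)]

# per-segment delta from the first segment, then rebase each block
mod_bases = dict((m.id, m.start - modules[0].start) for m in modules)
coverage_blocks = [(mod_bases[b.mod_id] + b.start, b.size) for b in bbs]
```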
@@ -102,21 +124,15 @@ class DrcovData(object):
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse_drcov_file(self, filepath):
def _parse(self):
"""
Parse drcov coverage from the given log file.
"""
with open(filepath, "rb") as f:
with open(self.filepath, "rb") as f:
self._parse_drcov_header(f)
self._parse_module_table(f)
self._parse_bb_table(f)
def _parse_drcov_data(self, drcov_data):
"""
Parse drcov coverage from the given data blob.
"""
pass # TODO
#--------------------------------------------------------------------------
# Parsing Routines - Internals
#--------------------------------------------------------------------------
@@ -128,15 +144,15 @@ class DrcovData(object):
# parse drcov version from log
# eg: DRCOV VERSION: 2
version_line = f.readline().strip()
version_line = f.readline().decode('utf-8').strip()
self.version = int(version_line.split(":")[1])
# parse drcov flavor from log
# eg: DRCOV FLAVOR: drcov
flavor_line = f.readline().strip()
flavor_line = f.readline().decode('utf-8').strip()
self.flavor = flavor_line.split(":")[1]
assert self.version == 2, "Only drcov version 2 log files supported"
assert self.version == 2 or self.version == 3, "Only drcov versions 2 and 3 log files supported"
def _parse_module_table(self, f):
"""
@@ -156,13 +172,13 @@ class DrcovData(object):
eg: 'Module Table: 11'
Format used in DynamoRIO v7.0.0-RC1 (and hopefully above)
eg: 'Module Table: version 2, count 11'
eg: 'Module Table: version X, count 11'
"""
# parse module table 'header'
# eg: Module Table: version 2, count 11
header_line = f.readline().strip()
header_line = f.readline().decode('utf-8').strip()
field_name, field_data = header_line.split(": ")
#assert field_name == "Module Table"
@@ -191,6 +207,8 @@ class DrcovData(object):
data_name, version = version_data.split(" ")
#assert data_name == "version"
self.module_table_version = int(version)
if not self.module_table_version in [2, 3, 4, 5]:
raise ValueError("Unsupported (new?) drcov log format...")
# parse module count in table from 'count Y'
data_name, count = count_data.split(" ")
@@ -203,15 +221,33 @@ class DrcovData(object):
-------------------------------------------------------------------
Format used in DynamoRIO v6.1.1 through 6.2.0
DynamoRIO v6.1.1, table version 1:
eg: (Not present)
Format used in DynamoRIO v7.0.0-RC1 (and hopefully above)
DynamoRIO v7.0.0-RC1, table version 2:
Windows:
'Columns: id, base, end, entry, checksum, timestamp, path'
Mac/Linux:
'Columns: id, base, end, entry, path'
DynamoRIO v7.0.17594B, table version 3:
Windows:
'Columns: id, containing_id, start, end, entry, checksum, timestamp, path'
Mac/Linux:
'Columns: id, containing_id, start, end, entry, path'
DynamoRIO v7.0.17640, table version 4:
Windows:
'Columns: id, containing_id, start, end, entry, offset, checksum, timestamp, path'
Mac/Linux:
'Columns: id, containing_id, start, end, entry, offset, path'
DynamoRIO v10.0.19734, table version 5:
Windows:
'Columns: id, containing_id, start, end, entry, offset, preferred_base, checksum, timestamp, path'
Mac/Linux:
'Columns: id, containing_id, start, end, entry, offset, preferred_base, path'
"""
# NOTE/COMPAT: there is no 'Columns' line for the v1 table...
@@ -220,7 +256,7 @@ class DrcovData(object):
# parse module table 'columns'
# eg: Columns: id, base, end, entry, checksum, timestamp, path
column_line = f.readline().strip()
column_line = f.readline().decode('utf-8').strip()
field_name, field_data = column_line.split(": ")
#assert field_name == "Columns"
@@ -233,11 +269,14 @@ class DrcovData(object):
"""
Parse drcov log modules in the module table from filestream.
"""
modules = collections.defaultdict(list)
# loop through each *expected* line in the module table and parse it
for i in xrange(self.module_table_count):
module = DrcovModule(f.readline().strip(), self.module_table_version)
self.modules.append(module)
for i in range(self.module_table_count):
module = DrcovModule(f.readline().decode('utf-8').strip(), self.module_table_version)
modules[module.filename].append(module)
self.modules = modules
def _parse_bb_table(self, f):
"""
@@ -253,7 +292,7 @@ class DrcovData(object):
# parse basic block table 'header'
# eg: BB Table: 2792 bbs
header_line = f.readline().strip()
header_line = f.readline().decode('utf-8').strip()
field_name, field_data = header_line.split(": ")
#assert field_name == "BB Table"
@@ -264,18 +303,19 @@ class DrcovData(object):
# peek at the next few bytes to determine if this is a binary bb table.
# An ascii bb table will have the line: 'module id, start, size:'
token = "module id"
token = b"module id"
saved_position = f.tell()
# is this an ascii table?
if f.read(len(token)) == token:
self.bb_table_is_binary = False
raise ValueError("ASCII DrCov logs are not supported at this time.")
# nope! binary table, seek back to the start of the table
# nope! binary table
else:
self.bb_table_is_binary = True
f.seek(saved_position)
# seek back to the start of the table
f.seek(saved_position)
def _parse_bb_table_entries(self, f):
"""
@@ -283,10 +323,38 @@ class DrcovData(object):
"""
# allocate the ctypes structure array of basic blocks
self.basic_blocks = (DrcovBasicBlock * self.bb_table_count)()
self.bbs = (DrcovBasicBlock * self.bb_table_count)()
# read the basic block entries directly into the newly allocated array
f.readinto(self.basic_blocks)
# read binary basic block entries directly into the newly allocated array
if self.bb_table_is_binary:
f.readinto(self.bbs)
# parse the plaintext basic block entries one by one
else:
self._parse_bb_table_text_entries(f)
def _parse_bb_table_text_entries(self, f):
"""
Parse drcov log basic block table text entries from filestream.
"""
table_header = f.readline().decode('utf-8').strip()
if table_header != "module id, start, size:":
raise ValueError("Invalid BB header: %r" % table_header)
pattern = re.compile(r"^module\[\s*(?P<mod>[0-9]+)\]\:\s*(?P<start>0x[0-9a-fA-F]+)\,\s*(?P<size>[0-9]+)$")
for i, bb in enumerate(self.bbs):
text_entry = f.readline().decode('utf-8').strip()
if not text_entry:
continue
match = pattern.match(text_entry)
if not match:
raise ValueError("Invalid BB entry: %r" % text_entry)
bb.start = int(match.group("start"), 16)
bb.size = int(match.group("size"), 10)
bb.mod_id = int(match.group("mod"), 10)
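The regex above accepts entries of the drcov ASCII block format, e.g. the illustrative line `module[  3]: 0x0000000000001090,  12`:

```python
import re

pattern = re.compile(
    r"^module\[\s*(?P<mod>[0-9]+)\]\:\s*(?P<start>0x[0-9a-fA-F]+)\,\s*(?P<size>[0-9]+)$"
)

# a fabricated sample entry in the 'module id, start, size' format
match = pattern.match("module[  3]: 0x0000000000001090,  12")
mod_id = int(match.group("mod"), 10)
start = int(match.group("start"), 16)
size = int(match.group("size"), 10)
```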
#------------------------------------------------------------------------------
# drcov module parser
@@ -308,10 +376,20 @@ class DrcovModule(object):
self.timestamp = 0
self.path = ""
self.filename = ""
self.containing_id = 0
# parse the module
self._parse_module(module_data, version)
@property
def start(self):
"""
Compatibility alias for the module base.
DrCov table version 2 --> 3 changed this parameter name base --> start.
"""
return self.base
def _parse_module(self, module_line, version):
"""
Parse a module table entry.
@@ -323,6 +401,12 @@ class DrcovModule(object):
self._parse_module_v1(data)
elif version == 2:
self._parse_module_v2(data)
elif version == 3:
self._parse_module_v3(data)
elif version == 4:
self._parse_module_v4(data)
elif version == 5:
self._parse_module_v5(data)
else:
raise ValueError("Unknown module format (v%u)" % version)
@@ -333,7 +417,7 @@ class DrcovModule(object):
self.id = int(data[0])
self.size = int(data[1])
self.path = str(data[2])
self.filename = os.path.basename(self.path)
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v2(self, data):
"""
@@ -348,7 +432,59 @@ class DrcovModule(object):
self.timestamp = int(data[5], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path)
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v3(self, data):
"""
Parse a module table v3 entry.
"""
self.id = int(data[0])
self.containing_id = int(data[1])
self.base = int(data[2], 16)
self.end = int(data[3], 16)
self.entry = int(data[4], 16)
if len(data) > 7: # Windows Only
self.checksum = int(data[5], 16)
self.timestamp = int(data[6], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v4(self, data):
"""
Parse a module table v4 entry.
"""
self.id = int(data[0])
self.containing_id = int(data[1])
self.base = int(data[2], 16)
self.end = int(data[3], 16)
self.entry = int(data[4], 16)
self.offset = int(data[5], 16)
if len(data) > 8: # Windows Only
self.checksum = int(data[6], 16)
self.timestamp = int(data[7], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
def _parse_module_v5(self, data):
"""
Parse a module table v5 entry.
"""
self.id = int(data[0])
self.containing_id = int(data[1])
self.base = int(data[2], 16)
self.end = int(data[3], 16)
self.entry = int(data[4], 16)
self.offset = int(data[5], 16)
self.preferred_base = int(data[6], 16)
if len(data) > 8: # Windows Only
self.checksum = int(data[7], 16)
self.timestamp = int(data[8], 16)
self.path = str(data[-1])
self.size = self.end-self.base
self.filename = os.path.basename(self.path.replace('\\', os.sep))
#------------------------------------------------------------------------------
# drcov basic block parser
@@ -387,10 +523,11 @@ if __name__ == "__main__":
# base usage
if argc < 2:
print "usage: %s <coverage filename>" % os.path.basename(sys.argv[0])
print("usage: {} <coverage filename>".format(os.path.basename(sys.argv[0])))
sys.exit()
# attempt file parse
x = DrcovData(argv[1])
for bb in x.basic_blocks:
print "0x%08x" % bb.start
for bb in x.bbs:
print("0x{:08x}".format(bb.start))
@@ -0,0 +1,42 @@
import os
import collections
from ..coverage_file import CoverageFile
class ModOffData(CoverageFile):
"""
A module+offset log parser.
"""
def __init__(self, filepath):
super(ModOffData, self).__init__(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_offsets(self, module_name):
return self.modules.get(module_name, {}).keys()
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse modoff coverage from the given log file.
"""
modules = collections.defaultdict(lambda: collections.defaultdict(int))
with open(self.filepath) as f:
for line in f:
trimmed = line.strip()
# skip empty lines
if not len(trimmed): continue
# comments can start with ';' or '#'
if trimmed[0] in [';', '#']: continue
module_name, bb_offset = line.rsplit("+", 1)
modules[module_name][int(bb_offset, 16)] += 1
self.modules = modules
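The module+offset format parsed above is one `name+hexoffset` pair per line, with `;`/`#` comment lines skipped and repeated offsets counted as hits. A standalone rerun of that logic on a hypothetical log:

```python
import collections

sample_log = [
    "; hypothetical module+offset trace",
    "boombox.exe+0x10",
    "boombox.exe+0x10",
    "winamp.dll+0x2a",
]

modules = collections.defaultdict(lambda: collections.defaultdict(int))
for line in sample_log:
    trimmed = line.strip()
    if not trimmed or trimmed[0] in (";", "#"):
        continue
    module_name, bb_offset = trimmed.rsplit("+", 1)
    modules[module_name][int(bb_offset, 16)] += 1   # offset -> hit count
```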
@@ -0,0 +1,82 @@
import collections
from ..coverage_file import CoverageFile
# 'known' instruction pointer labels from Tenet traces
INSTRUCTION_POINTERS = ['EIP', 'RIP', 'PC']
class TenetData(CoverageFile):
"""
A Tenet trace log parser.
"""
def __init__(self, filepath):
self._hitmap = {}
super(TenetData, self).__init__(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_addresses(self, module_name=None):
return self._hitmap.keys()
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse absolute instruction addresses from the given Tenet trace.
"""
hitmap = collections.defaultdict(int)
with open(self.filepath) as f:
while True:
# read 128mb chunks of 'lines' from the file
lines = f.readlines(1024 * 1024 * 128)
# no more lines to process, break
if not lines:
break
# parse the instruction addresses from lines, into the hitmap
self._process_lines(lines, hitmap)
# save the hitmap if we completed parsing without crashing
self._hitmap = hitmap
def _process_lines(self, lines, hitmap):
"""
Parse instruction addresses out of the given text lines.
"""
for line in lines:
# split the line (an execution delta) into its individual entries
delta = line.split(",")
# process each item (a name=value pair) in the execution delta
for item in delta:
# split name/value pair, and normalize the name for matching
name, value = item.split("=")
name = name.upper()
# ignore entries that are not the instruction pointer
if not name in INSTRUCTION_POINTERS:
continue
# save the parsed instruction pointer address to the hitmap
address = int(value, 16)
hitmap[address] += 1
# break because we don't expect two IP's on the same line
break
# continue to the next line
# ...
# done parsing this chunk of lines
return
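Each Tenet trace line is an execution delta of comma-separated `reg=value` pairs; only the instruction pointer entry matters here. A standalone sketch of the per-line logic above (the sample deltas are fabricated):

```python
INSTRUCTION_POINTERS = ["EIP", "RIP", "PC"]

def parse_delta(line, hitmap):
    """Record the instruction pointer from one execution delta."""
    for item in line.split(","):
        name, value = item.split("=")
        if name.upper() not in INSTRUCTION_POINTERS:
            continue
        address = int(value, 16)
        hitmap[address] = hitmap.get(address, 0) + 1
        break  # one IP per line, as above

hitmap = {}
parse_delta("rip=0x140001000,rax=0x1", hitmap)
parse_delta("rip=0x140001000,rbx=0x2", hitmap)
```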
@@ -0,0 +1,32 @@
import collections
from ..coverage_file import CoverageFile
class TraceData(CoverageFile):
"""
An instruction (or basic block) address trace log parser.
"""
def __init__(self, filepath):
self._hitmap = {}
super(TraceData, self).__init__(filepath)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def get_addresses(self, module_name=None):
return self._hitmap.keys()
#--------------------------------------------------------------------------
# Parsing Routines - Top Level
#--------------------------------------------------------------------------
def _parse(self):
"""
Parse absolute address coverage from the given log file.
"""
hitmap = collections.defaultdict(int)
with open(self.filepath) as f:
for line in f:
hitmap[int(line, 16)] += 1
self._hitmap = hitmap
@@ -0,0 +1,4 @@
from .palette import LighthousePalette
from .coverage_xref import CoverageXref
from .module_selector import ModuleSelector
from .coverage_overview import CoverageOverview
@@ -1,5 +1,9 @@
import logging
import weakref
from lighthouse.util import *
from lighthouse.util.qt import *
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.UI.ComboBox")
@@ -8,7 +12,7 @@ logger = logging.getLogger("Lighthouse.UI.ComboBox")
#------------------------------------------------------------------------------
SEPARATOR = "seperator"
SEPARATOR_HEIGHT = 5 # pixels
SEPARATOR_HEIGHT = 1 # pixels
ENTRY_USER = "USER"
ENTRY_SPECIAL = "SPECIAL"
@@ -24,7 +28,7 @@ class CoverageComboBox(QtWidgets.QComboBox):
"""
The Coverage ComboBox UI for switching between loaded coverage.
I had to write an unecessary amount of code to prototype the engaging
I had to write an unnecessary amount of code to prototype the engaging
combobox experiences I was looking for.
But now that we have all the important combobox components subclassed
@@ -39,6 +43,39 @@ class CoverageComboBox(QtWidgets.QComboBox):
# configure the widget for use
self._ui_init()
self.refresh_theme()
#--------------------------------------------------------------------------
# QComboBox Overloads
#--------------------------------------------------------------------------
def mouseReleaseEvent(self, e):
"""
Capture mouse release events on the QComboBox.
"""
# get the widget currently beneath the mouse event being handled
hovering = self.childAt(e.pos())
#
# if the hovered widget is the 'head' of the QComboBox, we assume
# the user has clicked it and should show the dropdown 'popup'
#
# we must showPopup() ourselves because internal Qt logic for
# 'editable' comboboxes tries to enter an editing mode for the field
# rather than expanding the dropdown.
#
# if you don't remember, our combobox is marked 'editable' to satisfy
# some internal Qt logic so that our 'Windows' draw style is used
#
if hovering == self.lineEdit():
self.showPopup()
e.accept()
return
# handle any other events as they normally should be
super(CoverageComboBox, self).mouseReleaseEvent(e)
#--------------------------------------------------------------------------
# Initialization - UI
@@ -51,6 +88,7 @@ class CoverageComboBox(QtWidgets.QComboBox):
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
self.setFont(self._font)
@@ -58,6 +96,20 @@ class CoverageComboBox(QtWidgets.QComboBox):
self.setModel(CoverageComboBoxModel(self._director, self))
self.setView(CoverageComboBoxView(self.model(), self))
#
# in the interest of maintaining a more consistent cross-platform
# style for the coverage combobox and its dropdown, we use an
# 'editable' QComboBox with the 'Windows' Qt style.
#
# since we don't actually want the QCombobox to be editable, we
# do everything we can to make it read-only / non-interactive.
#
self.setEditable(True)
self.lineEdit().setFont(self._font)
self.lineEdit().setReadOnly(True) # text can't be edited
self.lineEdit().setEnabled(False) # text can't be selected
#
# the combobox will pick a size based on its contents when it is first
# made visible, but we also make it arbitrarily resizable for the
@@ -66,18 +118,10 @@ class CoverageComboBox(QtWidgets.QComboBox):
self.setSizeAdjustPolicy(QtWidgets.QComboBox.AdjustToContentsOnFirstShow)
self.setSizePolicy(QtWidgets.QSizePolicy.Ignored, QtWidgets.QSizePolicy.Ignored)
self.setMaximumHeight(int(self._font_metrics.height()*1.75))
#
# the purpose of this stylesheet is to pad the visible selection text
# in the combobox 'head' on first show. The reason is that
# without this, the text for the selected coverage will lapse behind
# the combobox dropdown arrow (which is Qt by design???)
#
# I don't like the tail of the text disappearing behind this silly
# dropdown arrow, therefore we pad the right side of the combobox.
#
self.setStyleSheet("QComboBox { padding: 0 2ex 0 2ex; }")
# draw the QComboBox with a 'Windows'-esque style
self.setStyle(QtWidgets.QStyleFactory.create("Windows"))
# connect relevant signals
self._ui_init_signals()
@@ -119,7 +163,7 @@ class CoverageComboBox(QtWidgets.QComboBox):
# the default combobox signal handlers.
#
# the reason the deletion column clicks can pass through is because
# the model has technically marked their cells as 'un-selectable'
# through the flags() overload.
#
@@ -148,18 +192,10 @@ class CoverageComboBox(QtWidgets.QComboBox):
# event, (it looks weird) so clear the table/dropdown highlights now
#
self.view().selectionModel().setCurrentIndex(
QtCore.QModelIndex(),
QtCore.QItemSelectionModel.ClearAndSelect
)
#
# the deletion of an entry will shift all the entries beneath it up
@@ -193,12 +229,41 @@ class CoverageComboBox(QtWidgets.QComboBox):
"""
self._internal_refresh()
@disassembler.execute_ui
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
palette = self._director.palette
self.view().refresh_theme()
# configure the combobox's top row / visible dropdown
self.lineEdit().setStyleSheet(
"QLineEdit { "
" border: none;"
" padding: 0 0 0 2ex;"
" margin: 0;"
" background-color: %s;" % palette.combobox_background.name() +
"}"
)
# style the combobox dropdown
self.setStyleSheet(
"QComboBox {"
" color: %s;" % palette.combobox_text.name() +
" border: 1px solid %s;" % palette.combobox_border.name() +
" padding: 0;"
"} "
"QComboBox:hover, QComboBox:focus {"
" border: 1px solid %s;" % palette.combobox_border_focus.name() +
"}"
)
@disassembler.execute_ui
def _internal_refresh(self):
"""
Internal refresh of the coverage combobox.
"""
# refresh the combobox internals
self.model().refresh()
self.view().refresh()
@@ -215,10 +280,11 @@ class CoverageComboBox(QtWidgets.QComboBox):
Refresh the coverage combobox selection.
"""
# NOTE: we block any index change signals to stop unnecessary churn
self.blockSignals(True)
new_index = self.findData(self._director.coverage_name)
self.setCurrentIndex(new_index)
self.lineEdit().home(False)
self.blockSignals(False)
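The `blockSignals(True)` / `blockSignals(False)` bracket above is the standard Qt idiom for suppressing feedback loops during a programmatic refresh. A minimal pure-Python stand-in of the same pattern (not Lighthouse's actual Qt code; `Selector` is a hypothetical class for illustration):

```python
# Minimal stand-in for Qt's blockSignals() pattern: while a programmatic
# update is in flight, suppress change notifications so observers do not
# churn on an update they themselves initiated.

class Selector:
    """A tiny observable selection, mirroring QComboBox.setCurrentIndex()."""

    def __init__(self):
        self.index = -1
        self._callbacks = []
        self._signals_blocked = False

    def connect(self, callback):
        self._callbacks.append(callback)

    def blockSignals(self, blocked):
        self._signals_blocked = blocked

    def setCurrentIndex(self, index):
        self.index = index
        if self._signals_blocked:
            return  # state still changes, but no notification fires
        for callback in self._callbacks:
            callback(index)

selector = Selector()
seen = []
selector.connect(seen.append)

# a normal change notifies observers
selector.setCurrentIndex(1)

# a 'refresh' change is wrapped in blockSignals to avoid churn
selector.blockSignals(True)
selector.setCurrentIndex(2)
selector.blockSignals(False)
```

The state change still lands (`index == 2`); only the notification is suppressed, which is exactly why the real code can safely call `setCurrentIndex()` from inside its own refresh handler.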
#------------------------------------------------------------------------------
@@ -233,12 +299,108 @@ class CoverageComboBoxView(QtWidgets.QTableView):
def __init__(self, model, parent=None):
super(CoverageComboBoxView, self).__init__(parent)
self.setObjectName(self.__class__.__name__)
self._combobox = weakref.proxy(parent)
self._timer = None
# install the given data model into the table view
self.setModel(model)
# initialize UI elements
self._ui_init()
self.refresh_theme()
#--------------------------------------------------------------------------
# QTableView Overloads
#--------------------------------------------------------------------------
def showEvent(self, e):
"""
Show the QComboBox dropdown/popup.
"""
#
# the next line of code will prevent the combobox 'head' from getting
# any mouse actions now that the popup/dropdown is visible.
#
# this is pretty aggressive, but it will allow the user to 'collapse'
# the combobox dropdown while it is in an expanded state by simply
# clicking the combobox head as one can do to expand it.
#
# the reason this dirty trick is able to simulate a 'collapsing click'
# is because the user clicks 'outside' the popup/dropdown which
# automatically closes it. if the click was on the combobox head, it
# is simply ignored because we set this attribute!
#
# when the popup is closing, we undo this action in hideEvent().
#
# we have to use this workaround because we are using an 'editable' Qt
# combobox which behaves differently to clicks than a normal combobox.
#
# NOTE: we have to do this here in the tableview because the combobox's
# showPopup() and hidePopup() do not always trigger symmetrically.
#
# for example, hidePopup() was not being triggered when focus was lost
# via virtual desktop switch, and other external focus changes. this
# is really bad, because the combobox would get stuck *closed* as it
# was never re-enabled for mouse events
#
self._combobox.setAttribute(QtCore.Qt.WA_TransparentForMouseEvents)
def hideEvent(self, e):
"""
Hide the QComboBox dropdown/popup.
"""
#
# the combobox popup is now hidden / collapsed. the combobox head needs
# to be re-enabled to receive mouse clicks (eg, to expand it). this
# undoes the setAttribute action in showPopup() above.
#
# if the coverage combobox is *not* visible, the coverage window is
# probably being closed / deleted. but just in case, we should attempt
# to restore the combobox's ability to accept clicks before bailing.
#
# this fixes a bug / Qt warning first printed in IDA 7.4 where 'self'
# (the combobox) would be deleted by the time the 100ms timer in the
# 'normal' case fires below
#
if not self._combobox.isVisible():
self._combobox.setAttribute(QtCore.Qt.WA_TransparentForMouseEvents, False)
return
#
# in the more normal case, the combobox is simply being collapsed
# by the user clicking it, or clicking away from it.
#
# we use a short timer of 100ms to ensure the 'hiding' of the dropdown
# and its associated click are processed first. afterwards, it is safe to
# begin accepting clicks again.
#
self._timer = QtCore.QTimer.singleShot(100, self.__hidePopup_setattr)
def __hidePopup_setattr(self):
self._combobox.setAttribute(QtCore.Qt.WA_TransparentForMouseEvents, False)
def leaveEvent(self, e):
"""
Overload the mouse leave event.
"""
#
# this code mitigates a bug (feature?) where the last hovered index
# of the table view was retaining its MouseOver flag internally. This
# was keeping my 'X' icons highlighted if the mouse cursor left the
# table while touching one of these cells last.
#
# we basically send a fake 'Hover Event' to the table viewport at an
# invalid position so the table clears any remaining hover flags.
#
event = QtGui.QHoverEvent(QtCore.QEvent.HoverLeave, QtCore.QPoint(-1,-1), QtCore.QPoint(-1,-1))
QtWidgets.QApplication.sendEvent(self.viewport(), event)
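The trick above is to synthesize an event at an impossible position so the widget's cached hover state gets invalidated. A Qt-free sketch of the same idea (a hypothetical `HoverTracker`, not Lighthouse code):

```python
class HoverTracker:
    """Stand-in for a view that caches the last hovered cell index."""

    def __init__(self):
        self.hovered = None

    def hover_event(self, row):
        # a negative / out-of-bounds position clears any cached hover
        # state, mirroring the synthetic QHoverEvent at QPoint(-1, -1)
        self.hovered = row if row >= 0 else None

view = HoverTracker()
view.hover_event(3)    # the cursor last touched row 3
view.hover_event(-1)   # synthetic 'leave' event clears the stale flag
```

Without the synthetic event, the cached index would survive the real mouse-leave and keep the 'X' icon painted in its hovered style.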
#--------------------------------------------------------------------------
# Initialization - UI
@@ -251,13 +413,16 @@ class CoverageComboBoxView(QtWidgets.QTableView):
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
self.setFont(self._font)
# hide dropdown table headers, and default grid
self.horizontalHeader().setVisible(False)
self.verticalHeader().setVisible(False)
self.setShowGrid(False)
# let Qt automatically elide (...) long row text (coverage names)
self.resizeColumnToContents(0)
self.setTextElideMode(QtCore.Qt.ElideRight)
self.setWordWrap(False)
@@ -267,20 +432,12 @@ class CoverageComboBoxView(QtWidgets.QTableView):
hh = self.horizontalHeader()
#
# - set the coverage name column to be stretchy and as tall as the text
# - make the 'X' icon column fixed width
#
hh.setSectionResizeMode(0, QtWidgets.QHeaderView.Stretch)
vh.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)
vh.setMinimumSectionSize(0)
# get the column width hint from the model for the 'X' delete column
@@ -291,10 +448,12 @@ class CoverageComboBoxView(QtWidgets.QTableView):
)
# set the 'X' delete icon column width to a fixed size based on the hint
hh.setMinimumSectionSize(icon_column_width)
hh.setSectionResizeMode(1, QtWidgets.QHeaderView.ResizeToContents)
# install a delegate to do some custom painting against the combobox
self.setItemDelegate(ComboBoxDelegate(self))
#--------------------------------------------------------------------------
# Refresh
@@ -311,10 +470,10 @@ class CoverageComboBoxView(QtWidgets.QTableView):
#
# if this row is not a user defined entry, we want to merge ('span')
# its cells so there is no 'X' delete button column shown for it.
#
# this should apply to special rows such as the 'Hot Shell',
# 'Aggregate', or the 'separator' indexes
#
if not model.data(model.index(row, 1), QtCore.Qt.DecorationRole):
@@ -324,27 +483,27 @@ class CoverageComboBoxView(QtWidgets.QTableView):
else:
self.setSpan(row, 0, 0, model.columnCount())
#--------------------------------------------------------------------------
# QTableView Overloads
#--------------------------------------------------------------------------
@disassembler.execute_ui
def refresh_theme(self):
"""
Refresh UI facing elements to reflect the current theme.
"""
palette = self.model()._director.palette
self.setStyleSheet(
"QTableView {"
" background-color: %s;" % palette.combobox_background.name() +
" color: %s;" % palette.combobox_text.name() +
" margin: 0; outline: none;"
" border: 1px solid %s; " % palette.shell_border.name() +
"} "
"QTableView::item { " +
" padding: 0.5ex; border: 0; "
"} "
"QTableView::item:focus { " +
" background-color: %s; " % palette.combobox_selection_background.name() +
" color: %s; " % palette.combobox_selection_text.name() +
"} "
)
#------------------------------------------------------------------------------
# Coverage ComboBox - TableModel
@@ -366,13 +525,14 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
# initialize a monospace font to use with our widget(s)
self._font = MonospaceFont()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# load the raw 'X' delete icon from disk
delete_icon = QtGui.QPixmap(plugin_resource("icons/delete_coverage.png"))
# compute the appropriate size for the deletion icon
icon_height = int(self._font_metrics.height()*0.75)
icon_width = icon_height
# scale the icon as appropriate (very likely scaling it down)
@@ -421,7 +581,7 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
# row height size hint request
if role == QtCore.Qt.SizeHintRole:
# the separator 'row' has a special, 'thinner' row size
if section == self._seperator_index:
return SEPARATOR_HEIGHT
@@ -448,7 +608,7 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
if section == COLUMN_DELETE:
return self._delete_icon.size().width() * 2
# unhandled request, nothing to do
return None
def data(self, index, role=QtCore.Qt.DisplayRole):
@@ -470,11 +630,24 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
elif role == QtCore.Qt.TextAlignmentRole:
return QtCore.Qt.AlignVCenter | QtCore.Qt.AlignLeft
# combobox header, padded with " " to account for dropdown arrow overlap
elif role == QtCore.Qt.EditRole:
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
return self._director.get_coverage_string(self._entries[index.row()]) + " "
# data display request
elif role == QtCore.Qt.DisplayRole:
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
return self._director.get_coverage_string(self._entries[index.row()])
# tooltip
elif role == QtCore.Qt.ToolTipRole:
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
coverage = self._director.get_coverage(self._entries[index.row()])
return coverage.filepath if coverage.filepath else ""
elif index.column() == COLUMN_DELETE:
return "Delete loaded coverage"
# icon display request
elif role == QtCore.Qt.DecorationRole:
@@ -482,7 +655,7 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
if index.column() == COLUMN_DELETE:
#
# if the coverage entry is below the separator, it is a user
# loaded coverage and should always be deletable
#
@@ -501,7 +674,7 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
elif role == QtCore.Qt.AccessibleDescriptionRole:
#
# if the entry is ABOVE the separator index, it's a 'special'
# entry, eg 'Hot Shell', 'New Composition', 'Aggregate'
#
@@ -509,7 +682,7 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
return ENTRY_SPECIAL
#
# the entry IS the separator index
#
elif index.row() == self._seperator_index:
@@ -539,7 +712,7 @@ class CoverageComboBoxModel(QtCore.QAbstractTableModel):
if index.column() == COLUMN_DELETE:
return QtCore.Qt.ItemIsEnabled
# the separator should not be interactive in *any* way
if index.row() == self._seperator_index:
return QtCore.Qt.NoItemFlags
@@ -586,12 +759,11 @@ class ComboBoxDelegate(QtWidgets.QStyledItemDelegate):
dropdown table in the Coverage ComboBox a bit more to our liking.
"""
def __init__(self, parent):
super(ComboBoxDelegate, self).__init__(parent)
# painting property definitions
self._grid_color = parent.model()._director.palette.shell_border
def sizeHint(self, option, index):
"""
@@ -610,19 +782,29 @@ class ComboBoxDelegate(QtWidgets.QStyledItemDelegate):
if index.data(QtCore.Qt.AccessibleDescriptionRole) == ENTRY_USER:
painter.save()
painter.setPen(self._grid_color)
final_entry = (index.sibling(index.row()+1, 0).row() == -1)
# draw the grid line beneath the current row (a coverage entry)
tweak = QtCore.QPoint(0, 1) # 1px tweak provides better spacing
if not final_entry:
painter.drawLine(
option.rect.bottomLeft() + tweak,
option.rect.bottomRight() + tweak
)
#
# now we will re-draw the grid line *above* the current entry,
# fixing a minor graphical bug where grid lines could disappear
# after hovering over a row / entry
#
previous = index.sibling(index.row()-1, 0)
painter.drawLine(
option.rect.topLeft(),
option.rect.topRight()
)
painter.restore()
# custom paint the 'X' icon where applicable
if index.data(QtCore.Qt.DecorationRole):
@@ -634,7 +816,7 @@ class ComboBoxDelegate(QtWidgets.QStyledItemDelegate):
destination_rect = pixmap.rect()
destination_rect.moveCenter(option.rect.center())
# augment the icon pixmap to be greyed out (disabled) or colored
# augment the icon pixmap to be grayed out (disabled) or colored
# based on the mouse hover status of this index
if not (option.state & QtWidgets.QStyle.State_MouseOver):
pixmap = QtWidgets.QApplication.style().generatedIconPixmap(
@@ -645,9 +827,20 @@ class ComboBoxDelegate(QtWidgets.QStyledItemDelegate):
# draw the icon to the column
painter.drawPixmap(destination_rect, pixmap)
# nothing else to paint for the icon column entry
return
# custom paint the separator entry between special & normal coverage
if index.data(QtCore.Qt.AccessibleDescriptionRole) == SEPARATOR:
painter.save()
painter.setPen(self._grid_color)
painter.drawRect(
option.rect
)
painter.restore()
# nothing else to paint for the separator entry
return
# pass through to the standard painting
super(ComboBoxDelegate, self).paint(painter, option, index)
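The delegate above decides per-row whether to paint a bottom grid line by checking if a sibling row exists beneath the current index (`index.sibling(index.row()+1, 0).row() == -1`). The same decision, sketched without Qt (hypothetical helper, not the plugin's code):

```python
def rows_with_grid_line(entries):
    """Return the row indexes that get a bottom grid line painted:
    every row except the final one (which has no sibling beneath it),
    mirroring the final_entry check in ComboBoxDelegate.paint()."""
    last_row = len(entries) - 1
    return [row for row in range(len(entries)) if row != last_row]
```

Skipping the line under the last entry avoids a doubled border where the table's own frame already draws one.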
@@ -0,0 +1,387 @@
import os
import logging
import weakref
from lighthouse.util.qt import *
from lighthouse.util.misc import plugin_resource
from lighthouse.util.disassembler import disassembler
from lighthouse.composer import ComposingShell
from lighthouse.ui.coverage_table import CoverageTableView, CoverageTableModel, CoverageTableController
from lighthouse.ui.coverage_combobox import CoverageComboBox
from lighthouse.ui.coverage_settings import TableSettingsMenu
logger = logging.getLogger("Lighthouse.UI.Overview")
#------------------------------------------------------------------------------
# Coverage Overview
#------------------------------------------------------------------------------
class CoverageOverview(object):
"""
The Coverage Overview Widget.
"""
def __init__(self, lctx, widget):
self.lctx = lctx
self.widget = widget
self.director = self.lctx.director
self.lctx.coverage_overview = self
self.initialized = False
self._refreshed_callback = None
# see the EventProxy class below for more details
self._events = EventProxy(self)
self.widget.installEventFilter(self._events)
# plugin_resource(os.path.join("icons", "overview.png"))
# initialize the plugin UI
self._ui_init()
# refresh the data UI such that it reflects the most recent data
self.refresh()
# register for cues from the director
self._refreshed_callback = self.director.refreshed(self.refresh)
#--------------------------------------------------------------------------
# Pseudo Widget Functions
#--------------------------------------------------------------------------
@property
def name(self):
if not self.widget:
return "Coverage Overview"
return self.widget.name
@property
def visible(self):
if not self.widget:
return False
return self.widget.visible
def terminate(self):
"""
The CoverageOverview is being hidden / deleted.
"""
if self.widget is None:
return
if self._refreshed_callback:
self.director.unregister_refreshed(self._refreshed_callback)
self._refreshed_callback = None
if self.lctx.coverage_overview is self:
self.lctx.coverage_overview = None
self.initialized = False
self._combobox = None
self._shell = None
self._toolbar = None
self._table_view = None
self._table_controller = None
self._table_model = None
self._settings_button = None
self._settings_menu = None
self._shell_elements = None
self._events = None
self.widget = None
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
# initialize our ui elements
self._ui_init_table()
self._ui_init_toolbar()
self._ui_init_signals()
# layout the populated ui just before showing it
self._ui_layout()
def _ui_init_table(self):
"""
Initialize the coverage table.
"""
self._table_model = CoverageTableModel(self.lctx, self.widget)
self._table_controller = CoverageTableController(self.lctx, self._table_model)
self._table_view = CoverageTableView(self._table_controller, self._table_model, self.widget)
def _ui_init_toolbar(self):
"""
Initialize the coverage toolbar.
"""
# initialize child elements to go on the toolbar
self._ui_init_toolbar_elements()
self._ui_init_settings()
#
# create the 'toolbar', and customize its style. specifically, we are
# interested in tweaking the separator and padding between elements.
#
self._toolbar = QtWidgets.QToolBar()
self._toolbar.setStyle(QtWidgets.QStyleFactory.create("Windows"))
self._toolbar.setStyleSheet('QToolBar{padding:0;margin:0;}')
# populate the toolbar with all our subordinates
self._toolbar.addWidget(self._shell_elements)
self._toolbar.addWidget(self._settings_button)
def _ui_init_toolbar_elements(self):
"""
Initialize the coverage toolbar UI elements.
"""
# the composing shell
self._shell = ComposingShell(
self.lctx,
weakref.proxy(self._table_model),
weakref.proxy(self._table_view)
)
# the coverage combobox
self._combobox = CoverageComboBox(self.director)
# the splitter to make the shell / combobox resizable
self._shell_elements = QtWidgets.QSplitter(QtCore.Qt.Horizontal)
self._shell_elements.setStyleSheet(
"""
QSplitter
{
border: none;
}
QSplitter::handle
{
background-color: #909090;
width: 2px;
height: 2px;
margin: 0 0.5em 0 0.5em
}
QSplitter::handle:horizontal:hover
{
background-color: #3399FF;
}
""")
# add the child items we wish to put the 'splitter' between
# [ composing shell ] [SPLITTER] [ combobox ]
self._shell_elements.addWidget(self._shell)
self._shell_elements.addWidget(self._combobox)
# make the splitter responsive (animate) when hovered
self._shell_elements.handle(1).setAttribute(QtCore.Qt.WA_Hover)
# give the shell expansion preference over the combobox
self._shell_elements.setStretchFactor(0, 1)
def _ui_init_settings(self):
"""
Initialize the overview settings popup.
"""
# settings button
self._settings_button = QtWidgets.QToolButton()
self._settings_button.setIcon(get_qt_icon("SP_DialogResetButton"))
self._settings_button.setStyleSheet("QToolButton::menu-indicator{image: none;}")
# settings menu
self._settings_menu = TableSettingsMenu(self.widget)
def _ui_init_signals(self):
"""
Connect UI signals.
"""
self._settings_menu.connect_signals(self._table_controller, self.lctx)
self._settings_button.clicked.connect(self._ui_show_settings)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
# layout the major elements of our widget
layout = QtWidgets.QGridLayout()
layout.setSpacing(int(get_dpi_scale()*5))
layout.addWidget(self._table_view)
layout.addWidget(self._toolbar)
# apply the layout to the containing form
self.widget.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_show_settings(self):
"""
Handle a click of the settings button.
"""
delta = QtCore.QPoint(
-1*self._settings_menu.sizeHint().width(),
-1*self._settings_menu.sizeHint().height()
)
center = QtCore.QPoint(
int(self._settings_button.sizeHint().width()/2),
int(self._settings_button.sizeHint().height()/2)
)
where = self._settings_button.mapToGlobal(center+delta)
self._settings_menu.popup(where)
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
@disassembler.execute_ui
def refresh(self):
"""
Refresh the Coverage Overview.
"""
if not (self._table_model and self._shell and self._combobox):
return
self._table_model.refresh()
self._shell.refresh()
self._combobox.refresh()
@disassembler.execute_ui
def refresh_theme(self):
"""
Update visual elements based on theme change.
"""
if not (self._table_view and self._table_model and self._shell and self._combobox):
return
self._table_view.refresh_theme()
self._table_model.refresh_theme()
self._shell.refresh_theme()
self._combobox.refresh_theme()
#------------------------------------------------------------------------------
# Qt Event Filter
#------------------------------------------------------------------------------
debugger_docked = False
class EventProxy(QtCore.QObject):
#
# NOTE/COMPAT: QtCore.QEvent.Destroy not in IDA7? Just gonna ship our own...
# - https://doc.qt.io/qt-5/qevent.html#Type-enum
#
EventShow = 17
EventDestroy = 16
EventLayoutRequest = 76
EventUpdateLater = 78
def __init__(self, target):
super(EventProxy, self).__init__()
self._target = weakref.proxy(target)
self._first_hit = True
def eventFilter(self, source, event):
#
# hook the destroy event of the coverage overview widget so that we can
# cleanup after ourselves in the interest of stability
#
if int(event.type()) == self.EventDestroy:
source.removeEventFilter(self)
#
# XXX/V35: This is pretty hacky annoying stuff, but the lifetime
# of the CoverageOverview widget is managed internally by binja
# and gets deleted/cleaned up *after* a database is closed.
#
# it's best we just unload the lighthouse context in binja after
# the UI widgets have been destroyed (which aligns with IDA)
#
if disassembler.NAME == "BINJA":
lctx = self._target.lctx
core = lctx.core
core.binja_close_context(lctx.dctx)
# cleanup the UI / qt references for the CoverageOverview elements
self._target.terminate()
#
# this seems to be 'roughly' the last event triggered after the widget
# is done initializing in both IDA and Binja, but prior to the first
# user-triggered 'show' events.
#
# this is mostly to account for the fact that binja 'shows' the widget
# when it is initially created (outside of our control). this was
# causing lighthouse to automatically cache database metadata when
# every database was opened ...
#
elif int(event.type()) == self.EventLayoutRequest:
self._target.initialized = True
#
# this is used to hook a little bit after the 'show' event of the
# coverage overview. this is the most universal signal that the
# user is *actually* trying to use lighthouse in a meaningful way...
#
# we will use this moment first to check if they skipped straight to
# 'go' and opened the coverage overview without the metadata cache
# getting built.
#
# this case should only happen if the user does 'Show Coverage
# Overview' from the binja-controlled Window menu entry...
#
elif int(event.type()) == self.EventUpdateLater:
if self._target.visible and self._first_hit:
self._first_hit = False
if disassembler.NAME == "BINJA":
self._target.lctx.start()
if not self._target.director.metadata.cached:
self._target.director.refresh()
#
# this is an unknown event, but it seems to fire when the widget is
# being saved/restored by a QMainWidget (in IDA). we use this to try
# and ensure the Coverage Overview stays docked when flipping between
# Reversing and Debugging states in IDA.
#
# See issue #16 on github for more information.
#
elif int(event.type()) == 2002 and disassembler.NAME == "IDA":
import idaapi
#
# if the general registers IDA View exists, we make the assumption
# that the user has probably started debugging.
#
debug_mode = bool(idaapi.find_widget("General registers"))
#
# if this is the first time the user has started debugging, dock
# the coverage overview in the debug QMainWidget workspace. its
# dock status / position should persist future debugger launches.
#
global debugger_docked
if debug_mode and not debugger_docked:
idaapi.set_dock_pos(self._target._title, "Structures", idaapi.DP_TAB)
debugger_docked = True
return False
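The `EventProxy` above keys off raw `QEvent` type codes and always returns `False` so events continue on to their real target. A pure-Python sketch of that dispatch contract (Qt-free stand-in, with a plain dict playing the role of the CoverageOverview):

```python
class MiniEventProxy:
    """Pure-Python sketch of the EventProxy dispatch above."""

    EVENT_DESTROY = 16         # QEvent::Destroy
    EVENT_LAYOUT_REQUEST = 76  # QEvent::LayoutRequest

    def __init__(self, target):
        self._target = target

    def event_filter(self, event_type):
        # tear the target down on destroy, mark it initialized once the
        # first layout request arrives; always return False so the event
        # continues on to its real target (Qt's eventFilter contract)
        if event_type == self.EVENT_DESTROY:
            self._target["terminated"] = True
        elif event_type == self.EVENT_LAYOUT_REQUEST:
            self._target["initialized"] = True
        return False

target = {"terminated": False, "initialized": False}
proxy = MiniEventProxy(target)
passed_through = proxy.event_filter(MiniEventProxy.EVENT_LAYOUT_REQUEST)
```

Returning `True` instead would consume the event entirely, which is why the real filter only does so implicitly never: it observes, it does not intercept.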
@@ -0,0 +1,101 @@
import logging
from lighthouse.util.qt import *
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.UI.Settings")
class TableSettingsMenu(QtWidgets.QMenu):
"""
A quick-access settings menu for Lighthouse.
"""
def __init__(self, parent=None):
super(TableSettingsMenu, self).__init__(parent)
self._visible_action = None
self._ui_init_actions()
self.setToolTipsVisible(True)
#--------------------------------------------------------------------------
# QMenu Overloads
#--------------------------------------------------------------------------
def event(self, event):
"""
Hook the QMenu event stream.
"""
action = self.activeAction()
# swallow clicks to checkbox/radiobutton actions to keep qmenu open
if event.type() == QtCore.QEvent.MouseButtonRelease:
if action and action.isEnabled() and action.isCheckable():
action.trigger()
event.accept()
return True
# handle any other events as we normally should
return super(TableSettingsMenu, self).event(event)
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init_actions(self):
"""
Initialize the menu actions.
"""
# lighthouse colors
self._action_change_theme = QtWidgets.QAction("Change theme", None)
self._action_change_theme.setToolTip("Lighthouse color & theme customization")
self.addAction(self._action_change_theme)
self.addSeparator()
# painting
self._action_force_clear = QtWidgets.QAction("Force clear paint (slow!)", None)
self._action_force_clear.setToolTip("Attempt to forcefully clear stuck paint from the database")
self.addAction(self._action_force_clear)
self._action_disable_paint = QtWidgets.QAction("Disable painting", None)
self._action_disable_paint.setCheckable(True)
self._action_disable_paint.setToolTip("Disable the coverage painting subsystem")
self.addAction(self._action_disable_paint)
self.addSeparator()
# table actions
self._action_refresh_metadata = QtWidgets.QAction("Rebuild coverage mappings", None)
self._action_refresh_metadata.setToolTip("Refresh the database metadata and coverage mapping")
self.addAction(self._action_refresh_metadata)
self._action_export_html = QtWidgets.QAction("Generate HTML report", None)
self._action_export_html.setToolTip("Export the coverage table to HTML")
self.addAction(self._action_export_html)
self._action_hide_zero = QtWidgets.QAction("Hide 0% coverage", None)
self._action_hide_zero.setToolTip("Hide table entries with no coverage data")
self._action_hide_zero.setCheckable(True)
self.addAction(self._action_hide_zero)
def connect_signals(self, controller, lctx):
"""
Connect UI signals.
"""
self._action_change_theme.triggered.connect(lctx.core.palette.interactive_change_theme)
self._action_refresh_metadata.triggered.connect(lctx.director.refresh)
self._action_hide_zero.triggered[bool].connect(controller._model.filter_zero_coverage)
self._action_disable_paint.triggered[bool].connect(lambda x: lctx.painter.set_enabled(not x))
self._action_force_clear.triggered.connect(lctx.painter.force_clear)
self._action_export_html.triggered.connect(controller.export_to_html)
lctx.painter.status_changed(self._ui_painter_changed_status)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
@disassembler.execute_ui
def _ui_painter_changed_status(self, painter_enabled):
"""
Handle an event from the painter being enabled/disabled.
"""
self._action_disable_paint.setChecked(not painter_enabled)
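The `event()` overload in `TableSettingsMenu` keeps the menu open after toggling a checkbox action by triggering the action itself and consuming the mouse-release. The same control flow, sketched with plain classes instead of Qt (illustrative stand-ins, not the plugin's code):

```python
class CheckableAction:
    """Stand-in for a (possibly checkable) QAction."""

    def __init__(self, checkable=True, enabled=True):
        self.checkable = checkable
        self.enabled = enabled
        self.checked = False

    def trigger(self):
        if self.checkable:
            self.checked = not self.checked

def handle_release(action):
    """Mirror TableSettingsMenu.event(): trigger checkable actions
    ourselves and consume the click (returning True keeps the menu
    open); any other click falls through to the default handler."""
    if action and action.enabled and action.checkable:
        action.trigger()
        return True
    return False

toggle = CheckableAction()                 # e.g. 'Hide 0% coverage'
plain = CheckableAction(checkable=False)   # e.g. 'Generate HTML report'
```

Consuming the release (`True`) is what prevents Qt's default behavior of closing the menu on activation, so users can flip several settings in one visit.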
File diff suppressed because it is too large
@@ -0,0 +1,176 @@
import os
import logging
from lighthouse.util import lmsg
from lighthouse.util.qt import *
from lighthouse.util.misc import human_timestamp
from lighthouse.util.python import *
logger = logging.getLogger("Lighthouse.UI.Xref")
#------------------------------------------------------------------------------
# Coverage Xref Dialog
#------------------------------------------------------------------------------
class CoverageXref(QtWidgets.QDialog):
"""
A Qt Dialog to list other coverage sets that contain a given address.
This class makes up a rudimentary xref dialog. It does not follow Qt
'best practices' because it does not need to be super flashy, nor does
it demand much facetime.
"""
def __init__(self, director, address):
super(CoverageXref, self).__init__()
self._director = director
# dialog attributes
self.address = address
self.selected_coverage = None
self.selected_filepath = None
# configure the widget for use
self._ui_init()
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
self.setWindowTitle("Coverage Xrefs to 0x%X" % self.address)
self.setWindowFlags(self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
self.setModal(True)
self._font = self.font()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize coverage xref table
self._ui_init_table()
self._populate_table()
# layout the populated UI just before showing it
self._ui_layout()
def _ui_init_table(self):
"""
Initialize the coverage xref table UI elements.
"""
self._table = QtWidgets.QTableWidget()
self._table.verticalHeader().setVisible(False)
self._table.setHorizontalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
self._table.horizontalHeader().setFont(self._font)
self._table.setFont(self._font)
self._table.setWordWrap(False)
# symbol, cov %, name, time
self._table.setColumnCount(4)
self._table.setHorizontalHeaderLabels(["Sym", "Cov %", "Coverage Name", "Timestamp"])
self._table.setColumnWidth(0, 45)
self._table.setColumnWidth(1, 55)
self._table.setColumnWidth(2, 400)
self._table.setColumnWidth(3, 100)
# left align text in column headers
for i in range(4):
self._table.horizontalHeaderItem(i).setTextAlignment(QtCore.Qt.AlignLeft)
# disable bolding of column headers when selected
self._table.horizontalHeader().setHighlightSections(False)
# stretch the filename field, as it is the most important
self._table.horizontalHeader().setSectionResizeMode(2, QtWidgets.QHeaderView.Stretch)
# make table read only, select a full row by default
self._table.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self._table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
# catch double click events on table rows
self._table.cellDoubleClicked.connect(self._ui_cell_double_click)
def _populate_table(self):
"""
Populate the xref table with data from the coverage director.
"""
cov_xrefs = self._director.get_address_coverage(self.address)
file_xrefs = self._director.get_address_file(self.address)
# dedupe
for coverage in cov_xrefs:
if coverage.filepath in file_xrefs:
file_xrefs.remove(coverage.filepath)
# populate table with coverage details
self._table.setSortingEnabled(False)
self._table.setRowCount(len(cov_xrefs) + len(file_xrefs))
# coverage objects
for i, coverage in enumerate(cov_xrefs, 0):
self._table.setItem(i, 0, QtWidgets.QTableWidgetItem(self._director.get_shorthand(coverage.name)))
self._table.setItem(i, 1, QtWidgets.QTableWidgetItem("%5.2f" % (coverage.instruction_percent*100)))
name_entry = QtWidgets.QTableWidgetItem(coverage.name)
name_entry.setToolTip(coverage.filepath)
self._table.setItem(i, 2, name_entry)
date_entry = QtWidgets.QTableWidgetItem()
date_entry.setData(QtCore.Qt.DisplayRole, QtCore.QDateTime.fromMSecsSinceEpoch(int(coverage.timestamp*1000)))
self._table.setItem(i, 3, date_entry)
# filepaths
for i, filepath in enumerate(file_xrefs, len(cov_xrefs)):
# try to read timestamp of the file on disk (if it exists)
try:
timestamp = os.path.getmtime(filepath)
except (OSError, TypeError):
timestamp = 0
# populate table entry
self._table.setItem(i, 0, QtWidgets.QTableWidgetItem("-"))
self._table.setItem(i, 1, QtWidgets.QTableWidgetItem("-"))
name_entry = QtWidgets.QTableWidgetItem(os.path.basename(filepath))
name_entry.setToolTip(filepath)
self._table.setItem(i, 2, name_entry)
date_entry = QtWidgets.QTableWidgetItem()
date_entry.setData(QtCore.Qt.DisplayRole, QtCore.QDateTime.fromMSecsSinceEpoch(int(timestamp*1000)))
self._table.setItem(i, 3, date_entry)
self._table.resizeColumnsToContents()
self._table.resizeRowsToContents()
self._table.setSortingEnabled(True)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
layout = QtWidgets.QVBoxLayout()
# layout child widgets
layout.addWidget(self._table)
# scale widget dimensions based on DPI
height = int(get_dpi_scale() * 250)
width = int(get_dpi_scale() * 600)
self.setMinimumHeight(height)
self.setMinimumWidth(width)
# apply the widget layout
self.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_cell_double_click(self, row, column):
"""
A cell/row has been double clicked in the xref table.
"""
if self._table.item(row, 0).text() == "-":
self.selected_filepath = self._table.item(row, 2).toolTip()
else:
self.selected_coverage = self._table.item(row, 2).text()
self.accept()
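The dedupe step in `_populate_table` filters out any raw filepath whose coverage is already loaded, so a coverage set never appears twice in the xref table. That filtering as a standalone sketch (the `Coverage` namedtuple is an illustrative stand-in; only `.filepath` matters here):

```python
from collections import namedtuple

# stand-in for a loaded coverage object; only .filepath is used below
Coverage = namedtuple("Coverage", "name filepath")

def dedupe_xrefs(cov_xrefs, file_xrefs):
    """Drop filepaths already represented by a loaded coverage object."""
    loaded_paths = {coverage.filepath for coverage in cov_xrefs}
    return [path for path in file_xrefs if path not in loaded_paths]
```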
@@ -0,0 +1,168 @@
import os
import logging
from lighthouse.util import lmsg
from lighthouse.util.qt import *
from lighthouse.util.misc import human_timestamp
from lighthouse.util.python import *
logger = logging.getLogger("Lighthouse.UI.ModuleSelector")
#------------------------------------------------------------------------------
# Coverage Xref Dialog
#------------------------------------------------------------------------------
class ModuleSelector(QtWidgets.QDialog):
"""
A Qt Dialog to list all the coverage modules in a coverage file.
This class makes up a rudimentary selector dialog. It does not follow Qt
'best practices' because it does not need to be super flashy, nor does
it demand much facetime.
"""
def __init__(self, target_name, module_names, coverage_file):
super(ModuleSelector, self).__init__()
self._target_name = target_name
self._module_names = module_names
self._coverage_file = os.path.basename(coverage_file)
# dialog attributes
self.selected_name = None
# configure the widget for use
self._ui_init()
@property
def remember_alias(self):
return self._checkbox_remember.isChecked()
@property
def ignore_missing(self):
return self._checkbox_ignore_missing.isChecked()
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
self.setWindowTitle("Select module matching this database")
self.setWindowFlags(self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
self.setModal(True)
self._font = self.font()
self._font.setPointSizeF(normalize_to_dpi(10))
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize module selector table
self._ui_init_header()
self._ui_init_table()
self._populate_table()
# layout the populated UI just before showing it
self._ui_layout()
def _ui_init_header(self):
"""
Initialize the module selector header UI elements.
"""
description_text = \
"Lighthouse could not automatically identify the target module in the given coverage file:<br />" \
"<br />" \
"-- <b>Target:</b> {0}<br />" \
"-- <b>Coverage File:</b> {1}<br />" \
"<br />" \
"Please double click the name of the module that matches this database, or close this dialog<br />" \
"if you do not see your binary listed in the table below...".format(self._target_name, self._coverage_file)
self._label_description = QtWidgets.QLabel(description_text)
self._label_description.setTextFormat(QtCore.Qt.RichText)
self._label_description.setFont(self._font)
#self._label_description.setWordWrap(True)
# a checkbox to save the user selected alias to the database
self._checkbox_remember = QtWidgets.QCheckBox("Remember target module alias for this session")
self._checkbox_remember.setFont(self._font)
# a checkbox to ignore future 'missing coverage' / select module warnings
self._checkbox_ignore_missing = QtWidgets.QCheckBox("Suppress this dialog for the remaining coverage files")
self._checkbox_ignore_missing.setFont(self._font)
def _ui_init_table(self):
"""
Initialize the module selector table UI elements.
"""
self._table = QtWidgets.QTableWidget()
self._table.verticalHeader().setVisible(False)
self._table.setHorizontalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
self._table.horizontalHeader().setFont(self._font)
self._table.setFont(self._font)
# Create a simple table / list
self._table.setColumnCount(1)
self._table.setHorizontalHeaderLabels(["Module Name"])
# left align text in column headers
self._table.horizontalHeaderItem(0).setTextAlignment(QtCore.Qt.AlignLeft)
# disable bolding of column headers when selected
self._table.horizontalHeader().setHighlightSections(False)
# stretch the last column of the table (aesthetics)
self._table.horizontalHeader().setStretchLastSection(True)
# make table read only, select a full row by default
self._table.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self._table.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows)
# catch double click events on table rows
self._table.cellDoubleClicked.connect(self._ui_cell_double_click)
def _populate_table(self):
"""
Populate the module table with the module names provided to this dialog.
"""
self._table.setSortingEnabled(False)
self._table.setRowCount(len(self._module_names))
for i, module_name in enumerate(self._module_names, 0):
self._table.setItem(i, 0, QtWidgets.QTableWidgetItem(module_name))
self._table.resizeRowsToContents()
self._table.setSortingEnabled(True)
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
layout = QtWidgets.QVBoxLayout()
#layout.setContentsMargins(0,0,0,0)
# layout child widgets
layout.addWidget(self._label_description)
layout.addWidget(self._table)
layout.addWidget(self._checkbox_remember)
layout.addWidget(self._checkbox_ignore_missing)
# scale widget dimensions based on DPI
height = int(get_dpi_scale() * 250)
width = int(get_dpi_scale() * 400)
self.setMinimumHeight(height)
self.setMinimumWidth(width)
# apply the widget layout
self.setLayout(layout)
#--------------------------------------------------------------------------
# Signal Handlers
#--------------------------------------------------------------------------
def _ui_cell_double_click(self, row, column):
"""
A cell/row has been double clicked in the module table.
"""
self.selected_name = self._table.item(row, 0).text()
self.accept()
@@ -0,0 +1,573 @@
import os
import json
import glob
import shutil
import logging
# NOTE: Py2/Py3 compat
try:
from json.decoder import JSONDecodeError
except ImportError:
JSONDecodeError = ValueError
from lighthouse.util.qt import *
from lighthouse.util.log import lmsg
from lighthouse.util.misc import *
from lighthouse.util.disassembler import disassembler
logger = logging.getLogger("Lighthouse.UI.Palette")
#------------------------------------------------------------------------------
# Plugin Color Palette
#------------------------------------------------------------------------------
class LighthousePalette(object):
"""
Color Palette for the Lighthouse plugin.
"""
def __init__(self):
"""
Initialize default palette colors for Lighthouse.
"""
self._initialized = False
self._last_directory = None
self._required_fields = []
# hints about the user theme (light/dark)
self._user_qt_hint = "dark"
self._user_disassembly_hint = "dark"
self.theme = None
self._default_themes = \
{
"dark": "synth.json",
"light": "dullien.json"
}
# list of objects requesting a callback after a theme change
self._theme_changed_callbacks = []
# get a list of required theme fields, for user theme validation
self._load_required_fields()
# initialize the user theme directory
self._populate_user_theme_dir()
# load a placeholder theme (unhinted) for initial Lighthouse bring-up
self._load_preferred_theme(True)
self._initialized = False
@staticmethod
def get_plugin_theme_dir():
"""
Return the Lighthouse plugin theme directory.
"""
return plugin_resource("themes")
@staticmethod
def get_user_theme_dir():
"""
Return the Lighthouse user theme directory.
"""
theme_directory = os.path.join(
disassembler.get_disassembler_user_directory(),
"lighthouse_themes"
)
return theme_directory
#----------------------------------------------------------------------
# Properties
#----------------------------------------------------------------------
@property
def TOKEN_COLORS(self):
"""
Return the palette of token colors.
"""
return \
{
# logic operators
"OR": self.logic_token,
"XOR": self.logic_token,
"AND": self.logic_token,
"MINUS": self.logic_token,
# misc
"COMMA": self.comma_token,
"LPAREN": self.paren_token,
"RPAREN": self.paren_token,
#"WS": self.whitespace_token,
#"UNKNOWN": self.unknown_token,
# coverage
"COVERAGE_TOKEN": self.coverage_token,
}
#----------------------------------------------------------------------
# Callbacks
#----------------------------------------------------------------------
def theme_changed(self, callback):
"""
Subscribe a callback for theme change events.
"""
register_callback(self._theme_changed_callbacks, callback)
def _notify_theme_changed(self):
"""
Notify listeners of a theme change event.
"""
notify_callback(self._theme_changed_callbacks)
#----------------------------------------------------------------------
# Public
#----------------------------------------------------------------------
def warmup(self):
"""
Warms up the theming system prior to initial use.
"""
if self._initialized:
return
logger.debug("Warming up theme subsystem...")
#
# attempt to load the user's preferred (or hinted) theme. if we are
# successful, then there's nothing else to do!
#
self._refresh_theme_hints()
if self._load_preferred_theme():
self._initialized = True
logger.debug(" - warmup complete, using preferred theme!")
return
#
# failed to load the preferred theme... so delete the 'active'
# file (if there is one) and warn the user before falling back
#
try:
os.remove(os.path.join(self.get_user_theme_dir(), ".active_theme"))
except:
pass
disassembler.warning(
"Failed to load Lighthouse user theme!\n\n"
"Please check the console for more information..."
)
#
# if no theme is loaded, we will attempt to detect & load the in-box
# themes based on the user's disassembler theme
#
loaded = self._load_preferred_theme(fallback=True)
if not loaded:
lmsg("Could not load Lighthouse fallback theme!") # this is a bad place to be...
return
logger.debug(" - warmup complete, using hint-recommended theme!")
self._initialized = True
def interactive_change_theme(self):
"""
Open a file dialog and let the user select a new Lighthouse theme.
"""
# create & configure a Qt File Dialog for immediate use
file_dialog = QtWidgets.QFileDialog(
None,
"Open Lighthouse theme file",
self._last_directory,
"JSON Files (*.json)"
)
file_dialog.setFileMode(QtWidgets.QFileDialog.ExistingFile)
# prompt the user with the file dialog, and await filename(s)
filename, _ = file_dialog.getOpenFileName()
if not filename:
return
#
# ensure the user only loads themes from the user theme directory,
# as some of our internal loading logic depends on it
#
file_dir = os.path.abspath(os.path.dirname(filename))
user_dir = os.path.abspath(self.get_user_theme_dir())
if file_dir != user_dir:
text = "Please install your Lighthouse theme into the user theme directory:\n\n" + user_dir
disassembler.warning(text)
return
#
# remember the last directory we were in (parsed from a selected file)
# for the next time the user comes to load coverage files
#
if filename:
self._last_directory = os.path.dirname(filename) + os.sep
# log the captured (selected) filenames from the dialog
logger.debug("Captured filename from theme file dialog: '%s'" % filename)
#
# before applying the selected lighthouse theme, we should ensure that
# we know if the user is using a light or dark disassembler theme as
# it may change which colors get used by the lighthouse theme
#
self._refresh_theme_hints()
# if the selected theme fails to load, throw a visible warning
if not self._load_theme(filename):
disassembler.warning(
"Failed to load Lighthouse user theme!\n\n"
"Please check the console for more information..."
)
return
# since everything looks like it loaded okay, save this as the preferred theme
with open(os.path.join(self.get_user_theme_dir(), ".active_theme"), "w") as f:
f.write(filename)
def refresh_theme(self):
"""
Dynamically compute palette color based on IDA theme.
Depending on if IDA is using a dark or light theme, we *try*
to select colors that will hopefully keep things most readable.
"""
self._refresh_theme_hints()
self._load_preferred_theme()
#--------------------------------------------------------------------------
# Theme Internals
#--------------------------------------------------------------------------
def _populate_user_theme_dir(self):
"""
Create the Lighthouse user theme directory and install default themes.
"""
# create the user theme directory if it does not exist
user_theme_dir = self.get_user_theme_dir()
makedirs(user_theme_dir)
# enumerate all in-box / default themes
plugin_theme_dir = self.get_plugin_theme_dir()
json_files = glob.glob(os.path.join(plugin_theme_dir, "*.json"))
# copy the default themes into the user directory if they don't exist
for default_theme_file in json_files:
theme_name = os.path.basename(default_theme_file)
#
# check if lighthouse has copied the default themes into the user
# theme directory before. when 'default' themes exists, skip them
# rather than overwriting... as the user may have modified it
#
user_theme_file = os.path.join(user_theme_dir, theme_name)
if os.path.exists(user_theme_file):
continue
# copy the in-box themes to the user theme directory
shutil.copy(default_theme_file, user_theme_file)
#
# if the user tries to switch themes, ensure the file dialog will start
# in their user theme directory
#
self._last_directory = user_theme_dir
def _load_required_fields(self):
"""
Load the required theme fields from a donor in-box theme.
"""
logger.debug("Loading required theme fields from disk...")
# load a known-good theme from the plugin's in-box themes
filepath = os.path.join(self.get_plugin_theme_dir(), self._default_themes["dark"])
theme = self._read_theme(filepath)
#
# save all the defined fields in this 'good' theme as a ground truth
# to validate user themes against...
#
self._required_fields = theme["fields"].keys()
def _load_preferred_theme(self, fallback=False):
"""
Load the user's preferred theme, or the one hinted at by the theme subsystem.
"""
logger.debug("Loading preferred theme from disk...")
user_theme_dir = self.get_user_theme_dir()
# attempt to read the name of the user's active / preferred theme
active_filepath = os.path.join(user_theme_dir, ".active_theme")
try:
theme_name = open(active_filepath).read().strip()
logger.debug(" - Got '%s' from .active_theme" % theme_name)
except (OSError, IOError):
theme_name = None
#
# if the user does not have a preferred theme set yet, we will try to
# pick one for them based on their disassembler UI.
#
if not theme_name:
#
# we have two themes hints which roughly correspond to the tone of
# their disassembly background, and then their general Qt widgets.
#
# if both themes seem to align on style (eg the user is using a
# 'dark' UI), then we will select the appropriate in-box theme
#
if self._user_qt_hint == self._user_disassembly_hint:
theme_name = self._default_themes[self._user_qt_hint]
logger.debug(" - No preferred theme, hints suggest theme '%s'" % theme_name)
#
# the UI hints don't match, so the user is using some ... weird
# mismatched theming in their disassembler. let's just default to
# the 'dark' lighthouse theme as it is more robust
#
else:
theme_name = self._default_themes["dark"]
#
# should the user themes be in a bad state, we can fallback to the
# in-box themes. this should only happen if users malform the default
# themes that have been copied into the user theme directory
#
if fallback:
theme_path = os.path.join(self.get_plugin_theme_dir(), theme_name)
else:
theme_path = os.path.join(self.get_user_theme_dir(), theme_name)
# finally, attempt to load & apply the theme -- return True/False
return self._load_theme(theme_path)
def _validate_theme(self, theme):
"""
Perform rudimentary theme validation.
"""
logger.debug(" - Validating theme fields for '%s'..." % theme["name"])
user_fields = theme.get("fields", None)
if not user_fields:
lmsg("Could not find theme 'fields' definition")
return False
# check that all the 'required' fields exist in the given theme
for field in self._required_fields:
if field not in user_fields:
lmsg("Could not find required theme field '%s'" % field)
return False
# theme looks good enough for now...
return True
def _load_theme(self, filepath):
"""
Load and apply the Lighthouse theme at the given filepath.
"""
# attempt to read json theme from disk
try:
theme = self._read_theme(filepath)
# reading the file from disk failed
except OSError:
lmsg("Could not open theme file at '%s'" % filepath)
return False
# JSON decoding failed
except JSONDecodeError as e:
lmsg("Failed to decode theme '%s' to json" % filepath)
lmsg(" - " + str(e))
return False
# do some basic sanity checking on the given theme file
if not self._validate_theme(theme):
return False
# try applying the loaded theme to Lighthouse
try:
self._apply_theme(theme)
except Exception as e:
lmsg("Failed to load Lighthouse user theme\n%s" % e)
return False
# return success
self._notify_theme_changed()
return True
def _read_theme(self, filepath):
"""
Parse the Lighthouse theme file from the given filepath.
"""
logger.debug(" - Reading theme file '%s'..." % filepath)
# attempt to load the theme file contents from disk
raw_theme = open(filepath, "r").read()
# convert the theme file contents to a json object/dict
theme = json.loads(raw_theme)
# all good
return theme
def _apply_theme(self, theme):
"""
Apply the given theme definition to Lighthouse.
"""
logger.debug(" - Applying theme '%s'..." % theme["name"])
colors = theme["colors"]
for field_name, color_entry in theme["fields"].items():
# color has 'light' and 'dark' variants
if isinstance(color_entry, list):
color_name = self._pick_best_color(field_name, color_entry)
# there is only one color defined
else:
color_name = color_entry
# load the color
color_value = colors[color_name]
color = QtGui.QColor(*color_value)
# set theme self.[field_name] = color
setattr(self, field_name, color)
# HACK: IDA uses BBGGRR for its database highlighting
if disassembler.NAME == "IDA":
rgb = int(self.coverage_paint.name()[1:], 16)
self.coverage_paint = swap_rgb(rgb)
# all done, save the theme in case we need it later
self.theme = theme
def _pick_best_color(self, field_name, color_entry):
"""
Given a variable color_entry, select the best color based on the theme hints.
"""
assert len(color_entry) == 2, "Malformed color entry, must be (dark, light)"
dark, light = color_entry
# coverage_paint is actually the only field that applies to disas...
if field_name == "coverage_paint":
if self._user_disassembly_hint == "dark":
return dark
else:
return light
# the rest of the fields should be considered 'qt' fields
if self._user_qt_hint == "dark":
return dark
return light
#--------------------------------------------------------------------------
# Theme Inference
#--------------------------------------------------------------------------
def _refresh_theme_hints(self):
"""
Peek at the UI context to infer what kind of theme the user might be using.
"""
self._user_qt_hint = self._qt_theme_hint()
self._user_disassembly_hint = self._disassembly_theme_hint() or "dark"
def _disassembly_theme_hint(self):
"""
Binary hint of the IDA color theme.
This routine returns a best effort hint as to what kind of theme is
in use for the IDA Views (Disas, Hex, HexRays, etc).
Returns 'dark' or 'light' indicating the user's theme
"""
#
# determine whether to use a 'dark' or 'light' paint based on the
# background color of the user's IDA text based windows
#
bg_color = disassembler.get_disassembly_background_color()
if not bg_color:
logger.debug(" - Failed to get hint for disassembly background...")
return None
# return 'dark' or 'light'
return test_color_brightness(bg_color)
def _qt_theme_hint(self):
"""
Binary hint of the Qt color theme.
This routine returns a best effort hint as to what kind of theme the
QtWidgets throughout IDA are using. This is to accommodate users
who may be using Zyantific's IDASkins plugins (or others) to further
customize IDA's appearance.
Returns 'dark' or 'light' indicating the user's theme
"""
#
# to determine what kind of Qt based theme IDA is using, we create a
# test widget and check the colors put into the palette the widget
# inherits from the application (eg, IDA).
#
test_widget = QtWidgets.QWidget()
#
# in order to 'realize' the palette used to render (draw) the widget,
# it must first be made visible. since we don't want to pop random
# widgets in front of the user, we set this attribute so that we can
# silently bake the widget colors.
#
# NOTE/COMPAT: WA_DontShowOnScreen
#
# https://www.riverbankcomputing.com/news/pyqt-56
#
# lmao, don't ask me why they forgot about this attribute from 5.0 - 5.6
#
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
if disassembler.NAME == "BINJA" or (disassembler.NAME == "IDA"
and disassembler._version_major == 9
and disassembler._version_minor >= 2):
test_widget.setAttribute(QtCore.Qt.WA_DontShowOnScreen)
else:
test_widget.setAttribute(103) # taken from http://doc.qt.io/qt-5/qt.html
# render the (invisible) widget
test_widget.show()
# now we farm the background color from the qwidget
bg_color = test_widget.palette().color(QtGui.QPalette.Window)
# 'hide' & delete the widget
test_widget.hide()
test_widget.deleteLater()
# return 'dark' or 'light'
return test_color_brightness(bg_color)
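`_apply_theme` converts `coverage_paint` through `swap_rgb` because IDA stores its database highlight colors as `0xBBGGRR` rather than Qt's `0xRRGGBB`. The real helper lives in `lighthouse.util`; this minimal equivalent is an assumption, not the in-tree code:

```python
def swap_rgb(rgb):
    """Swap a 24-bit 0xRRGGBB integer into 0xBBGGRR ordering (and back)."""
    r = (rgb & 0xFF0000) >> 16
    g = rgb & 0x00FF00
    b = (rgb & 0x0000FF) << 16
    return b | g | r
```

The swap is its own inverse, so the same helper maps an IDA color back to Qt ordering.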

[Four binary image files changed (8.7 KiB, 16 KiB, 8.1 KiB, 8.1 KiB); before/after previews omitted.]

@@ -0,0 +1,60 @@
{
"name": "Dullien",
"colors":
{
"black": [0, 0, 0],
"white": [255, 255, 255],
"gray": [100, 100, 100],
"lightGray": [220, 220, 220],
"red": [255, 0, 0],
"blue": [0, 0, 255],
"lightRed": [240, 150, 150],
"lightGreen": [150, 240, 150],
"darkGreen": [0, 60, 0],
"lightBlue": [140, 170, 220]
},
"fields":
{
"coverage_paint": ["darkGreen", "lightGreen"],
"table_text": "black",
"table_grid": "gray",
"table_coverage_none": "lightRed",
"table_coverage_bad": "lightRed",
"table_coverage_good": "lightGreen",
"table_background": "white",
"table_selection": "lightBlue",
"html_summary_text": "gray",
"html_table_header": "lightGray",
"html_page_background": "white",
"shell_text": "black",
"shell_text_valid": "blue",
"shell_text_invalid": "red",
"shell_highlight_invalid": "lightRed",
"shell_border": "gray",
"shell_border_focus": "lightBlue",
"shell_background": "white",
"shell_hint_text": "black",
"shell_hint_background": "white",
"logic_token": "red",
"comma_token": "black",
"paren_token": "black",
"coverage_token": "blue",
"combobox_text": "black",
"combobox_selection_text": "white",
"combobox_selection_background": "lightBlue",
"combobox_border": "gray",
"combobox_border_focus": "lightBlue",
"combobox_background": "white"
}
}
@@ -0,0 +1,69 @@
{
"name": "Long Night",
"author": "https://github.com/ioncodes",
"colors":
{
"black": [33, 33, 33],
"white": [241, 239, 236],
"darkGray": [20, 20, 20],
"darkGray2": [30, 30, 30],
"darkGray3": [54, 54, 54],
"gray": [100, 100, 100],
"lightGray": [55, 55, 55],
"red": [188, 101, 141],
"green": [64, 255, 64],
"blue": [104, 134, 197],
"lightBlue": [128, 200, 255],
"darkBlue": [44, 44, 44],
"purple": [121, 104, 197],
"focusRed": [255, 83, 112],
"selection": [67, 67, 67]
},
"fields":
{
"coverage_paint": ["darkBlue", "lightBlue"],
"table_text": "white",
"table_grid": "black",
"table_coverage_none": "black",
"table_coverage_bad": "red",
"table_coverage_good": "blue",
"table_background": "black",
"table_selection": "purple",
"html_summary_text": "white",
"html_table_header": "white",
"html_page_background": "black",
"shell_text": "white",
"shell_text_valid": "lightBlue",
"shell_text_invalid": "red",
"shell_highlight_invalid": "red",
"shell_border": "lightGray",
"shell_border_focus": "focusRed",
"shell_background": "black",
"shell_hint_text": "white",
"shell_hint_background": "black",
"logic_token": "red",
"comma_token": "green",
"paren_token": "green",
"coverage_token": "lightBlue",
"combobox_text": "white",
"combobox_selection_text": "white",
"combobox_selection_background": "selection",
"combobox_border": "lightGray",
"combobox_border_focus": "focusRed",
"combobox_background": "black"
}
}
@@ -0,0 +1,66 @@
{
"name": "Synth",
"colors":
{
"black": [0, 0, 0],
"white": [255, 255, 255],
"darkGray": [20, 20, 20],
"darkGray2": [30, 30, 30],
"darkGray3": [54, 54, 54],
"gray": [100, 100, 100],
"lightGray": [180, 180, 180],
"red": [221, 0, 0],
"green": [64, 255, 64],
"blue": [51, 153, 255],
"lightBlue": [128, 200, 255],
"darkBlue": [0, 0, 153],
"purple": [100, 0, 130]
},
"fields":
{
"coverage_paint": ["darkBlue", "lightBlue"],
"table_text": "white",
"table_grid": "black",
"table_coverage_none": "darkGray2",
"table_coverage_bad": "red",
"table_coverage_good": "blue",
"table_background": "darkGray",
"table_selection": "purple",
"html_summary_text": "lightGray",
"html_table_header": "gray",
"html_page_background": "darkGray3",
"shell_text": "white",
"shell_text_valid": "lightBlue",
"shell_text_invalid": "red",
"shell_highlight_invalid": "red",
"shell_border": "gray",
"shell_border_focus": "lightGray",
"shell_background": "darkGray2",
"shell_hint_text": "white",
"shell_hint_background": "darkGray3",
"logic_token": "red",
"comma_token": "green",
"paren_token": "green",
"coverage_token": "lightBlue",
"combobox_text": "white",
"combobox_selection_text": "white",
"combobox_selection_background": "blue",
"combobox_border": "gray",
"combobox_border_focus": "lightGray",
"combobox_background": "darkGray2"
}
}
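Every entry under `fields` in the theme files above is either a single color name or a `[dark, light]` pair that `_apply_theme` resolves against the `colors` table via `_pick_best_color`. A self-contained sketch of that resolution, using a trimmed copy of the Synth definitions (`qt_hint` stands in for the palette's theme hints):

```python
SYNTH_SNIPPET = {
    "colors": {
        "white":     [255, 255, 255],
        "darkBlue":  [0, 0, 153],
        "lightBlue": [128, 200, 255],
    },
    "fields": {
        "coverage_paint": ["darkBlue", "lightBlue"],  # (dark, light) pair
        "table_text": "white",                        # single fixed color
    },
}

def resolve_field(theme, field_name, qt_hint="dark"):
    """Resolve a theme field to its RGB triple, honoring variant pairs."""
    entry = theme["fields"][field_name]
    if isinstance(entry, list):
        dark, light = entry
        color_name = dark if qt_hint == "dark" else light
    else:
        color_name = entry
    return theme["colors"][color_name]
```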
@@ -0,0 +1,4 @@
from .python import *
from .misc import *
from .debug import *
from .log import lmsg, logging_started, start_logging
@@ -0,0 +1,97 @@
import sys
import inspect
import cProfile
import traceback
from .log import lmsg
from .disassembler import disassembler
#------------------------------------------------------------------------------
# Debug
#------------------------------------------------------------------------------
#
# This file contains random snippets of code that I frequently use while
# developing and debugging parts of lighthouse. I don't expect any of this
# code to be active or in use for major releases.
#
#------------------------------------------------------------------------------
# Call Profiling
#------------------------------------------------------------------------------
pr = cProfile.Profile()
def profile(func):
"""
A simple function profiling decorator.
"""
def wrap(*args, **kwargs):
global pr
pr.enable()
result = func(*args, **kwargs)
pr.disable()
pr.print_stats(sort="tottime")
return result
return wrap
#------------------------------------------------------------------------------
# Function Line Profiling
#------------------------------------------------------------------------------
try:
import pprofile
def line_profile(func):
def profiled_func(*args, **kwargs):
try:
profiler = pprofile.ThreadProfile()
with profiler():
return func(*args, **kwargs)
finally:
caller_file = inspect.getfile(func)
profiler.annotate(pprofile.EncodeOrReplaceWriter(sys.stdout), [caller_file])
return profiled_func
except ImportError:
def line_profile(func):
def nothing(*args, **kwargs):
return func(*args, **kwargs)
return nothing
#------------------------------------------------------------------------------
# Error Logging
#------------------------------------------------------------------------------
def catch_errors(func):
"""
A simple catch-all decorator to try and log Lighthouse crashes.
This will be used to wrap high-risk or new code, in an effort to catch
and fix bugs without leaving the user in a stuck state.
"""
def wrap(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception:
exc_type, exc_value, exc_traceback = sys.exc_info()
st = traceback.format_stack()[:-1]
ex = traceback.format_exception(exc_type, exc_value, exc_traceback)[2:]
# log full crashing callstack to console
full_error = st + ex
full_error = ''.join(full_error).splitlines()
lmsg("Lighthouse experienced an error... please file an issue on GitHub with this traceback:")
lmsg("")
for line in full_error:
lmsg(line)
# notify the user that a bug occurred
disassembler.warning(
"Something bad happened to Lighthouse :-(\n\n" \
"Please file an issue on GitHub with the traceback from your disassembler console."
)
return wrap
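The interesting trick in catch_errors is splicing the *caller's* stack (format_stack) onto the exception's stack (format_exception) so the logged traceback shows the full path into the crash, not just the frames below the decorator. A minimal standalone sketch of that splice (helper names here are ours, and the slice indices differ slightly from the decorator's since there is no wrapper frame to drop):

```python
import sys
import traceback

def format_crash():
    """Splice the caller's stack onto the exception's stack, yielding
    one continuous traceback, as the decorator above does."""
    exc_type, exc_value, exc_tb = sys.exc_info()
    st = traceback.format_stack()[:-1]                                # frames leading here
    ex = traceback.format_exception(exc_type, exc_value, exc_tb)[1:]  # drop the header line
    return "".join(st + ex).splitlines()

def boom():
    raise ValueError("kaboom")

try:
    boom()
except Exception:
    lines = format_crash()
```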
@@ -0,0 +1,43 @@
#--------------------------------------------------------------------------
# Disassembler API Selector
#--------------------------------------------------------------------------
#
# this file will select and load the shimmed disassembler API for the
# appropriate (current) disassembler platform.
#
# see api.py for more details regarding this API shim layer
#
disassembler = None
#--------------------------------------------------------------------------
# IDA API Shim
#--------------------------------------------------------------------------
if disassembler is None:
try:
from .ida_api import IDACoreAPI, IDAContextAPI
disassembler = IDACoreAPI()
DisassemblerContextAPI = IDAContextAPI
except ImportError:
pass
#--------------------------------------------------------------------------
# Binary Ninja API Shim
#--------------------------------------------------------------------------
if disassembler is None:
try:
from .binja_api import BinjaCoreAPI, BinjaContextAPI
disassembler = BinjaCoreAPI()
DisassemblerContextAPI = BinjaContextAPI
except ImportError:
pass
#--------------------------------------------------------------------------
# Unknown Disassembler
#--------------------------------------------------------------------------
if disassembler is None:
raise NotImplementedError("Unknown or unsupported disassembler!")
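The selector above is a try/except ImportError cascade: attempt each shim in order, keep the first one that imports, and fail loudly if none do. The same pattern can be stated generically (this sketch uses importlib and a stdlib module as a stand-in backend; the candidate names are illustrative, not real shims):

```python
import importlib

def select_backend(candidates):
    """Return the first importable module from an ordered candidate list,
    mirroring the ImportError cascade above."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise NotImplementedError("Unknown or unsupported disassembler!")

# the first candidate is intentionally bogus; selection falls through to 'json'
backend = select_backend(["ida_api_that_does_not_exist", "json"])
```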
@@ -0,0 +1,466 @@
import abc
import logging
from ..qt import QT_AVAILABLE, QtGui, QtWidgets
logger = logging.getLogger("Lighthouse.API")
#------------------------------------------------------------------------------
# Disassembler API
#------------------------------------------------------------------------------
#
# the purpose of this file is to provide an abstraction layer for the more
# generic disassembler APIs required by the plugin codebase. we strive to
# use (or extend) this API for the bulk of our disassembler operations,
# making the plugin as disassembler-agnostic as possible.
#
# by subclassing the templated classes below, the plugin can support other
disassembler platforms relatively easily. at the moment, implementing these
# subclasses is ~50% of the work that is required to add lighthouse support
# to any given interactive disassembler.
#
class DisassemblerCoreAPI(object):
"""
An abstract implementation of the core disassembler APIs.
"""
__metaclass__ = abc.ABCMeta
# the name of the disassembler framework, eg 'IDA' or 'BINJA'
NAME = NotImplemented
@abc.abstractmethod
def __init__(self):
self._ctxs = {}
# required version fields
self._version_major = NotImplemented
self._version_minor = NotImplemented
self._version_patch = NotImplemented
if not self.headless and QT_AVAILABLE:
from ..qt import WaitBox
self._waitbox = WaitBox("Please wait...")
else:
self._waitbox = None
def __delitem__(self, key):
del self._ctxs[key]
def __getitem__(self, key):
return self._ctxs[key]
def __setitem__(self, key, value):
self._ctxs[key] = value
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
def version_major(self):
"""
Return the major version number of the disassembler framework.
"""
assert self._version_major != NotImplemented
return self._version_major
def version_minor(self):
"""
Return the minor version number of the disassembler framework.
"""
assert self._version_minor != NotImplemented
return self._version_minor
def version_patch(self):
"""
Return the patch version number of the disassembler framework.
"""
assert self._version_patch != NotImplemented
return self._version_patch
@abc.abstractproperty
def headless(self):
"""
Return a bool indicating if the disassembler is running without a GUI.
"""
pass
#--------------------------------------------------------------------------
# Synchronization Decorators
#--------------------------------------------------------------------------
@staticmethod
def execute_read(function):
"""
Thread-safe function decorator to READ from the disassembler database.
"""
raise NotImplementedError("execute_read() has not been implemented")
@staticmethod
def execute_write(function):
"""
Thread-safe function decorator to WRITE to the disassembler database.
"""
raise NotImplementedError("execute_write() has not been implemented")
@staticmethod
def execute_ui(function):
"""
Thread-safe function decorator to perform UI disassembler actions.
This function is generally used for executing UI (Qt) events from
a background thread. as such, your implementation is expected to
transfer execution to the main application thread where it is safe to
perform Qt actions.
"""
raise NotImplementedError("execute_ui() has not been implemented")
#--------------------------------------------------------------------------
# Disassembler Universal APIs
#--------------------------------------------------------------------------
@abc.abstractmethod
def get_disassembler_user_directory(self):
"""
Return the 'user' directory for the disassembler.
"""
pass
@abc.abstractmethod
def get_disassembly_background_color(self):
"""
Return the background color of the disassembly text view.
"""
pass
@abc.abstractmethod
def is_msg_inited(self):
"""
Return a bool indicating if the disassembler output window is initialized.
"""
pass
def warning(self, text):
"""
Display a warning dialog box with the given text.
"""
msgbox = QtWidgets.QMessageBox()
before = msgbox.sizeHint().width()
msgbox.setIcon(QtWidgets.QMessageBox.Critical)
after = msgbox.sizeHint().width()
icon_width = after - before
msgbox.setWindowTitle("Lighthouse Warning")
msgbox.setText(text)
font = msgbox.font()
fm = QtGui.QFontMetricsF(font)
text_width = fm.size(0, text).width()
# don't ask...
spacer = QtWidgets.QSpacerItem(int(text_width*1.1 + icon_width), 0, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Expanding)
layout = msgbox.layout()
layout.addItem(spacer, layout.rowCount(), 0, 1, layout.columnCount())
msgbox.setLayout(layout)
# show the dialog
msgbox.exec_()
@abc.abstractmethod
def message(self, message):
"""
Print a message to the disassembler console.
"""
pass
#--------------------------------------------------------------------------
# UI APIs
#--------------------------------------------------------------------------
#
# NOTE: these APIs and their usage are a little ... obtuse.
# this is primarily because the IDA & Binja dockable widget management
# system is rather different.
#
# these APIs make a best effort at unifying the systems in a manner that
# works for this project. it may not be ideal for the universal use case
# but is good enough for our purposes.
#
@abc.abstractmethod
def register_dockable(self, dockable_name, create_widget_callback):
"""
Register a callback with the disassembler to generate dockable widgets.
- dockable_name: the name of the window / dockable to be created
- create_widget_callback: a static function that returns a new dockable widget
The registered callback will be called automatically by certain events
that precede the display of the named dockable. These events may
include a new database being opened, or show_dockable being called.
"""
pass
@abc.abstractmethod
def create_dockable_widget(self, parent, dockable_name):
"""
Creates a dockable widget.
This function should generally be called within the create_widget_callback
described in register_dockable(...).
"""
pass
@abc.abstractmethod
def show_dockable(self, dockable_name):
"""
Show the named dockable widget.
"""
pass
@abc.abstractmethod
def hide_dockable(self, dockable_name):
"""
Hide the named dockable widget.
"""
pass
#------------------------------------------------------------------------------
# WaitBox API
#------------------------------------------------------------------------------
def show_wait_box(self, text, modal=True):
"""
Show the disassembler universal WaitBox.
"""
assert QT_AVAILABLE, "This function can only be used in a Qt runtime"
self._waitbox.set_text(text)
self._waitbox.show(modal)
def hide_wait_box(self):
"""
Hide the disassembler universal WaitBox.
"""
assert QT_AVAILABLE, "This function can only be used in a Qt runtime"
self._waitbox.hide()
def replace_wait_box(self, text):
"""
Replace the text in the disassembler universal WaitBox.
"""
assert QT_AVAILABLE, "This function can only be used in a Qt runtime"
self._waitbox.set_text(text)
#------------------------------------------------------------------------------
# Disassembler Contextual API
#------------------------------------------------------------------------------
class DisassemblerContextAPI(object):
"""
An abstract implementation of database/contextual disassembler APIs.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def __init__(self, dctx):
self.dctx = dctx
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@abc.abstractproperty
def busy(self):
"""
Return a bool indicating if the disassembler is busy / processing.
"""
pass
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
@abc.abstractmethod
def get_current_address(self):
"""
Return the current cursor address in the open database.
"""
pass
@abc.abstractmethod
def get_database_directory(self):
"""
Return the directory for the open database.
"""
pass
@abc.abstractmethod
def get_function_addresses(self):
"""
Return all defined function addresses in the open database.
"""
pass
@abc.abstractmethod
def get_function_name_at(self, address):
"""
Return the name of the function at the given address.
This is generally the user-facing/demangled name seen throughout the
disassembler and is probably what you want to use for almost everything.
"""
pass
@abc.abstractmethod
def get_function_raw_name_at(self, address):
"""
Return the raw (eg, unmangled) name of the function at the given address.
On the backend, most disassemblers store what is called the 'true' or
'raw' function name.
"""
pass
@abc.abstractmethod
def get_imagebase(self):
"""
Return the base address of the open database.
"""
pass
@abc.abstractmethod
def get_root_filename(self):
"""
Return the root executable (file) name used to generate the database.
"""
pass
@abc.abstractmethod
def navigate(self, address, function_address=None):
"""
Jump the disassembler UI to the given address.
"""
pass
@abc.abstractmethod
def navigate_to_function(self, function_address, address):
"""
Jump the disassembler UI to the given address, within a function.
"""
pass
@abc.abstractmethod
def set_function_name_at(self, function_address, new_name):
"""
Set the function name at given address.
"""
pass
#--------------------------------------------------------------------------
# Hooks API
#--------------------------------------------------------------------------
@abc.abstractmethod
def create_rename_hooks(self):
"""
Return a hooking object that can capture rename events for this context.
"""
pass
#--------------------------------------------------------------------------
# Function Prefix API
#--------------------------------------------------------------------------
#
# the following APIs are used to apply or clear prefixes to multiple
# functions in the disassembly database. the only thing you're expected
# to do here is select an appropriate PREFIX_SEPARATOR.
#
# your prefix separator is expected to be something unique, that a user
# would probably *never* put into their function name themselves but
# looks somewhat normal.
#
# in IDA, putting '%' in a function name appears as '_' in the function
# list, so we use that as a prefix separator. in Binary Ninja, we use a
# unicode character that looks like an underscore character.
#
# it is probably safe to steal the unicode char we use with binja for
# your own implementation.
#
PREFIX_SEPARATOR = NotImplemented
def prefix_function(self, function_address, prefix):
"""
Prefix a function name with the given string.
"""
original_name = self.get_function_raw_name_at(function_address)
new_name = str(prefix) + self.PREFIX_SEPARATOR + str(original_name)
# rename the function with the newly prefixed name
self.set_function_name_at(function_address, new_name)
def prefix_functions(self, function_addresses, prefix):
"""
Prefix a list of functions with the given string.
"""
for function_address in function_addresses:
self.prefix_function(function_address, prefix)
def clear_prefix(self, function_address):
"""
Clear the prefix from a given function.
"""
prefixed_name = self.get_function_raw_name_at(function_address)
#
# split the function name on the last prefix separator, saving
# everything that comes after (eg, the original func name)
#
new_name = prefixed_name.rsplit(self.PREFIX_SEPARATOR)[-1]
# the name doesn't appear to have had a prefix, nothing to do...
if new_name == prefixed_name:
return
# rename the function with the prefix(s) now stripped
self.set_function_name_at(function_address, new_name)
def clear_prefixes(self, function_addresses):
"""
Clear the prefix from a list of given functions.
"""
for function_address in function_addresses:
self.clear_prefix(function_address)
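The prefix scheme above hinges on the separator and on rsplit: because rsplit with no maxsplit splits on *every* separator, taking the last element strips stacked prefixes in one pass. A minimal standalone sketch of those semantics (helper names and the '%' separator choice here are illustrative; the real separator is per-backend):

```python
SEP = "%"  # IDA-style separator; the Binja shim uses U+2581 instead

def add_prefix(name, prefix):
    """Build 'prefix<SEP>name', as prefix_function() does."""
    return "%s%s%s" % (prefix, SEP, name)

def strip_prefix(name):
    """Keep only what follows the last separator, as clear_prefix() does;
    stacked prefixes are removed in a single pass."""
    return name.rsplit(SEP)[-1]
```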
#------------------------------------------------------------------------------
# Hooking
#------------------------------------------------------------------------------
class RenameHooks(object):
"""
An abstract implementation of disassembler hooks to capture rename events.
"""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def hook(self):
"""
Install hooks into the disassembler that capture rename events.
"""
pass
@abc.abstractmethod
def unhook(self):
"""
Remove hooks used to capture rename events.
"""
pass
def renamed(self, address, new_name):
"""
This will be hooked by Lighthouse at runtime to capture rename events.
"""
pass
@@ -0,0 +1,423 @@
# -*- coding: utf-8 -*-
import os
import sys
import logging
import functools
import threading
import collections
from .api import DisassemblerCoreAPI, DisassemblerContextAPI
from ..qt import *
from ..misc import is_mainthread, not_mainthread
import binaryninja
from binaryninja import PythonScriptingInstance, binaryview
from binaryninja.plugin import BackgroundTaskThread
logger = logging.getLogger("Lighthouse.API.Binja")
#------------------------------------------------------------------------------
# Utils
#------------------------------------------------------------------------------
def execute_sync(function):
"""
Synchronize with the disassembler for safe database access.
"""
@functools.wraps(function)
def wrapper(*args, **kwargs):
#
# in Binary Ninja, it is only safe to access the BNDB from a thread
# that is *not* the mainthread. if we appear to already be in a
# background thread of some sort, simply execute the given function
#
if not is_mainthread():
return function(*args, **kwargs)
#
# if we are in the mainthread, we need to schedule a background
# task to perform our database task/function instead
#
# this inline function definition is technically what will execute
# in a database-safe background thread. we use this thunk to
# capture any output the function may want to return to the user.
#
output = [None]
def thunk():
output[0] = function(*args, **kwargs)
return 1
class DatabaseRead(BackgroundTaskThread):
"""
A stub task to safely read from the BNDB.
"""
def __init__(self, text, function):
super(DatabaseRead, self).__init__(text, False)
self._task_to_run = function
def run(self):
self._task_to_run()
self.finish()
# schedule the database read and wait for its completion
t = DatabaseRead("Accessing database...", thunk)
t.start()
t.join()
# return the output of the synchronized execution / read
return output[0]
return wrapper
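The thunk/output-list pattern above exists because a background thread's return value cannot be handed back directly; a one-slot list closed over by the thunk smuggles it out. The same mechanism, stripped of the Binary Ninja BackgroundTaskThread machinery, looks like this (a sketch with plain threading, not the plugin's actual code path):

```python
import threading

def run_off_thread(function, *args, **kwargs):
    """Capture a background thread's return value through a one-slot
    list, the same thunk pattern the wrapper above uses."""
    output = [None]
    def thunk():
        # executes on the worker thread; result escapes via the closure
        output[0] = function(*args, **kwargs)
    worker = threading.Thread(target=thunk)
    worker.start()
    worker.join()
    return output[0]

total = run_off_thread(sum, [1, 2, 3])
```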
#------------------------------------------------------------------------------
# Disassembler API
#------------------------------------------------------------------------------
class BinjaCoreAPI(DisassemblerCoreAPI):
NAME = "BINJA"
def __init__(self):
super(BinjaCoreAPI, self).__init__()
self._init_version()
def _init_version(self):
version_string = binaryninja.core_version()
# retrieve Binja's version #
if "-" in version_string: # dev
disassembler_version = version_string.split("-", 1)[0]
else: # commercial, personal
disassembler_version = version_string.split(" ", 1)[0]
major, minor, patch, *_ = disassembler_version.split(".") + ['0']
# save the version number components for later use
self._version_major = major
self._version_minor = minor
self._version_patch = patch
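The version parsing above handles two string shapes: dev builds carry a '-' suffix, while release builds append an edition after a space; padding with '0' tolerates two-component versions. A hedged restatement as a pure function (returning ints here for comparison convenience, whereas the code above keeps the split strings):

```python
def parse_binja_version(version_string):
    """Restate _init_version()'s parsing as a pure function."""
    if "-" in version_string:                  # dev build, eg '3.5.4526-dev'
        core = version_string.split("-", 1)[0]
    else:                                      # release, eg '2.4 Personal'
        core = version_string.split(" ", 1)[0]
    # pad with '0' so two-component versions still unpack cleanly
    major, minor, patch, *_ = core.split(".") + ["0"]
    return int(major), int(minor), int(patch)
```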
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def headless(self):
return not binaryninja.core_ui_enabled()
#--------------------------------------------------------------------------
# Synchronization Decorators
#--------------------------------------------------------------------------
@staticmethod
def execute_read(function):
return execute_sync(function)
@staticmethod
def execute_write(function):
return execute_sync(function)
@staticmethod
def execute_ui(function):
@functools.wraps(function)
def wrapper(*args, **kwargs):
ff = functools.partial(function, *args, **kwargs)
# if we are already in the main (UI) thread, execute now
if is_mainthread():
ff()
return
# schedule the task to run in the main thread
binaryninja.execute_on_main_thread(ff)
return wrapper
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
def get_disassembler_user_directory(self):
return os.path.split(binaryninja.user_plugin_path())[0]
def get_disassembly_background_color(self):
return binaryninjaui.getThemeColor(binaryninjaui.ThemeColor.LinearDisassemblyBlockColor)
def is_msg_inited(self):
return True
@execute_ui.__func__
def warning(self, text):
super(BinjaCoreAPI, self).warning(text)
def message(self, message):
print(message)
#--------------------------------------------------------------------------
# UI API Shims
#--------------------------------------------------------------------------
def register_dockable(self, dockable_name, create_widget_callback):
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.addDockWidget(dockable_name, create_widget_callback, QtCore.Qt.RightDockWidgetArea, QtCore.Qt.Horizontal, False)
def create_dockable_widget(self, parent, dockable_name):
return DockableWidget(parent, dockable_name)
def show_dockable(self, dockable_name):
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.setVisible(dockable_name, True)
def hide_dockable(self, dockable_name):
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.setVisible(dockable_name, False)
#--------------------------------------------------------------------------
# XXX Binja Specific Helpers
#--------------------------------------------------------------------------
def binja_get_bv_from_dock(self):
dh = DockHandler.getActiveDockHandler()
if not dh:
return None
vf = dh.getViewFrame()
if not vf:
return None
vi = vf.getCurrentViewInterface()
bv = vi.getData()
return bv
class BinjaContextAPI(DisassemblerContextAPI):
def __init__(self, dctx):
super(BinjaContextAPI, self).__init__(dctx)
self.bv = dctx
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def busy(self):
return self.bv.analysis_info.state != binaryninja.enums.AnalysisState.IdleState
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
def get_current_address(self):
# TODO/V35: this doesn't work because of the loss of context bug...
#ctx = UIContext.activeContext()
#ah = ctx.contentActionHandler()
#ac = ah.actionContext()
#return ac.address
dh = DockHandler.getActiveDockHandler()
if not dh:
return 0
vf = dh.getViewFrame()
if not vf:
return 0
ac = vf.actionContext()
if not ac:
return 0
return ac.address
@BinjaCoreAPI.execute_read
def get_database_directory(self):
return os.path.dirname(self.bv.file.filename)
@not_mainthread
def get_function_addresses(self):
return [x.start for x in self.bv.functions]
def get_function_name_at(self, address):
func = self.bv.get_function_at(address)
if not func:
return None
return func.symbol.short_name
@BinjaCoreAPI.execute_read
def get_function_raw_name_at(self, address):
func = self.bv.get_function_at(address)
if not func:
return None
return func.name
@not_mainthread
def get_imagebase(self):
return self.bv.start
@not_mainthread
def get_root_filename(self):
return os.path.basename(self.bv.file.original_filename)
def navigate(self, address):
return self.bv.navigate(self.bv.view, address)
def navigate_to_function(self, function_address, address):
#
# attempt a more 'precise' jump, that guarantees to place us within
# the given function. this is necessary when trying to jump to an
# an address/node that is shared between two functions
#
funcs = self.bv.get_functions_containing(address)
if not funcs:
return False
#
# try to find the function that contains our target (address) and has
# a matching function start...
#
for func in funcs:
if func.start == function_address:
break
# no matching function ???
else:
return False
dh = DockHandler.getActiveDockHandler()
vf = dh.getViewFrame()
vi = vf.getCurrentViewInterface()
return vi.navigateToFunction(func, address)
def set_function_name_at(self, function_address, new_name):
func = self.bv.get_function_at(function_address)
if not func:
return
if new_name == "":
new_name = None
state = self.bv.begin_undo_actions()
func.name = new_name
self.bv.commit_undo_actions(state)
#--------------------------------------------------------------------------
# Hooks API
#--------------------------------------------------------------------------
def create_rename_hooks(self):
return RenameHooks(self.bv)
#------------------------------------------------------------------------------
# Function Prefix API
#------------------------------------------------------------------------------
PREFIX_SEPARATOR = "\u2581" # unicode 0x2581, 'LOWER ONE EIGHTH BLOCK'
#------------------------------------------------------------------------------
# Hooking
#------------------------------------------------------------------------------
class RenameHooks(binaryview.BinaryDataNotification):
"""
A hooking class to catch symbol changes in Binary Ninja.
"""
def __init__(self, bv):
self._bv = bv
def hook(self):
self._bv.register_notification(self)
def unhook(self):
self._bv.unregister_notification(self)
def symbol_added(self, *args):
self.__symbol_handler(*args)
def symbol_updated(self, *args):
self.__symbol_handler(*args)
def symbol_removed(self, *args):
self.__symbol_handler(*args, True)
def __symbol_handler(self, view, symbol, removed=False):
func = self._bv.get_function_at(symbol.address)
if not func or func.start != symbol.address:
return
if removed:
self.name_changed(symbol.address, "sub_%x" % symbol.address)
else:
self.name_changed(symbol.address, symbol.name)
def name_changed(self, address, name):
"""
A placeholder callback, which will get hooked / replaced once live.
"""
pass
#------------------------------------------------------------------------------
# UI
#------------------------------------------------------------------------------
if QT_AVAILABLE:
import binaryninjaui
from binaryninjaui import DockHandler, DockContextHandler, UIContext, UIActionHandler
class DockableWidget(QtWidgets.QWidget, DockContextHandler):
"""
A dockable Qt widget for Binary Ninja.
"""
def __init__(self, parent, name):
QtWidgets.QWidget.__init__(self, parent)
DockContextHandler.__init__(self, self, name)
self.actionHandler = UIActionHandler()
self.actionHandler.setupActionHandler(self)
self._active_view = None
self._visible_for_view = collections.defaultdict(lambda: False)
@property
def visible(self):
return self._visible_for_view[self._active_view]
@visible.setter
def visible(self, is_visible):
self._visible_for_view[self._active_view] = is_visible
def shouldBeVisible(self, view_frame):
if not view_frame:
return False
if USING_PYSIDE6:
import shiboken6 as shiboken
else:
import shiboken2 as shiboken
vf_ptr = shiboken.getCppPointer(view_frame)[0]
return self._visible_for_view[vf_ptr]
def notifyVisibilityChanged(self, is_visible):
self.visible = is_visible
def notifyViewChanged(self, view_frame):
if not view_frame:
self._active_view = None
return
if USING_PYSIDE6:
import shiboken6 as shiboken
else:
import shiboken2 as shiboken
self._active_view = shiboken.getCppPointer(view_frame)[0]
if self.visible:
dock_handler = DockHandler.getActiveDockHandler()
dock_handler.setVisible(self.m_name, True)
@@ -0,0 +1,613 @@
import os
import sys
import time
import logging
import binascii
import tempfile
import functools
import idaapi
import idautils
if int(idaapi.get_kernel_version()[0]) < 7:
idaapi.warning("Lighthouse has deprecated support for IDA 6, please upgrade.")
raise ImportError
from .api import DisassemblerCoreAPI, DisassemblerContextAPI
from ..qt import *
from ..misc import is_mainthread, get_string_between
logger = logging.getLogger("Lighthouse.API.IDA")
#------------------------------------------------------------------------------
# Utils
#------------------------------------------------------------------------------
def execute_sync(function, sync_type):
"""
Synchronize with the disassembler for safe database access.
Modified from https://github.com/vrtadmin/FIRST-plugin-ida
"""
@functools.wraps(function)
def wrapper(*args, **kwargs):
output = [None]
#
# this inline function definition is technically what will execute
# in the context of the main thread. we use this thunk to capture
# any output the function may want to return to the user.
#
def thunk():
output[0] = function(*args, **kwargs)
return 1
if is_mainthread():
thunk()
else:
idaapi.execute_sync(thunk, sync_type)
# return the output of the synchronized execution
return output[0]
return wrapper
#------------------------------------------------------------------------------
# Disassembler Core API (universal)
#------------------------------------------------------------------------------
class IDACoreAPI(DisassemblerCoreAPI):
NAME = "IDA"
def __init__(self):
super(IDACoreAPI, self).__init__()
self._dockable_factory = {}
self._dockable_widgets = {}
self._init_version()
def _init_version(self):
# retrieve IDA's version #
disassembler_version = idaapi.get_kernel_version()
major, minor = map(int, disassembler_version.split("."))
# save the version number components for later use
self._version_major = major
self._version_minor = minor
self._version_patch = 0
#--------------------------------------------------------------------------
# Properties
#--------------------------------------------------------------------------
@property
def headless(self):
return idaapi.cvar.batch
#--------------------------------------------------------------------------
# Synchronization Decorators
#--------------------------------------------------------------------------
@staticmethod
def execute_read(function):
return execute_sync(function, idaapi.MFF_READ)
@staticmethod
def execute_write(function):
return execute_sync(function, idaapi.MFF_WRITE)
@staticmethod
def execute_ui(function):
return execute_sync(function, idaapi.MFF_FAST)
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
def get_disassembler_user_directory(self):
return idaapi.get_user_idadir()
def get_disassembly_background_color(self):
"""
Get the background color of the IDA disassembly view.
Since there is no supported way to probe the palette & colors in use by
IDA, we must get creative. This function attempts to locate an IDA
disassembly view, and take a screenshot of said widget. It will then
attempt to extract the color of a single background pixel (hopefully).
"""
# method one
color = self._get_ida_bg_color_from_file()
if color:
return color
# method two, fallback
color = self._get_ida_bg_color_from_view()
# return the found background color (may be None)
return color
def is_msg_inited(self):
return idaapi.is_msg_inited()
@execute_ui.__func__
def warning(self, text):
super(IDACoreAPI, self).warning(text)
@execute_ui.__func__
def message(self, message):
print(message)
#--------------------------------------------------------------------------
# UI API Shims
#--------------------------------------------------------------------------
def register_dockable(self, dockable_name, create_widget_callback):
self._dockable_factory[dockable_name] = create_widget_callback
def create_dockable_widget(self, parent, dockable_name):
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
if USING_PYSIDE6:
from shiboken6 import wrapInstance
else:
import sip
def wrapInstance(ptr, base=None):
return sip.wrapinstance(int(ptr), base)
# create a dockable widget, and save a reference to it for later use
twidget = idaapi.create_empty_widget(dockable_name)
self._dockable_widgets[dockable_name] = twidget
# cast the IDA 'twidget' as a Qt widget for use
widget = wrapInstance(int(twidget), QtWidgets.QWidget)
widget.name = dockable_name
widget.visible = False
# return the dockable QtWidget / container
return widget
def show_dockable(self, dockable_name):
try:
make_dockable = self._dockable_factory[dockable_name]
except KeyError:
return False
parent, dctx = None, None # not used for IDA's integration
widget = make_dockable(dockable_name, parent, dctx)
# get the original twidget, so we can use it with the IDA API's
#twidget = idaapi.TWidget__from_ptrval__(widget) NOTE: IDA 7.2+ only...
twidget = self._dockable_widgets.pop(dockable_name)
if not twidget:
self.warning("Could not open dockable window, because its reference is gone?!?")
return
# show the dockable widget
flags = idaapi.PluginForm.WOPN_TAB | idaapi.PluginForm.WOPN_RESTORE | idaapi.PluginForm.WOPN_PERSIST
idaapi.display_widget(twidget, flags)
widget.visible = True
# attempt to 'dock' the widget in a reasonable location
for target in ["IDA View-A", "Pseudocode-A"]:
dwidget = idaapi.find_widget(target)
if dwidget:
idaapi.set_dock_pos(dockable_name, target, idaapi.DP_RIGHT)
break
def hide_dockable(self, dockable_name):
pass # TODO/IDA: this should never actually be called by lighthouse right now
#--------------------------------------------------------------------------
# Theme Prediction Helpers (Internal)
#--------------------------------------------------------------------------
def _get_ida_bg_color_from_file(self):
"""
Get the background color of the IDA disassembly views via HTML export.
"""
logger.debug("Attempting to get IDA disassembly background color from HTML...")
#
# TODO/IDA: we need better early detection of whether IDA is fully
# ready; this check isn't effective, and this theme function can crash
# IDA if called too early (eg, during db load...).
#
# this isn't a problem now... but I don't want us to be at risk of
# hard crashing people's IDA in the future should we change something.
#
imagebase = idaapi.get_imagebase()
#if imagebase == idaapi.BADADDR:
# logger.debug(" - No imagebase...")
# return None
# create a temp file that we can write to
handle, path = tempfile.mkstemp()
os.close(handle)
# attempt to generate an 'html' dump of the first 0x20 bytes (instructions)
ida_fd = idaapi.fopenWT(path)
idaapi.gen_file(idaapi.OFILE_LST, ida_fd, imagebase, imagebase+0x20, idaapi.GENFLG_GENHTML)
# IDA 9.x SDK fix: removed `idaapi.eclose`, added `ida_fpro.qfclose`
if int(idaapi.get_kernel_version()[0]) >= 9:
import ida_fpro
ida_fpro.qfclose(ida_fd)
else:
idaapi.eclose(ida_fd)
# read the dumped text
with open(path, "r") as fd:
html = fd.read()
# delete the temp file from disk
try:
os.remove(path)
except OSError:
pass
# attempt to parse the user's disassembly background color from the html (7.0?)
bg_color_text = get_string_between(html, '<body bgcolor="', '">')
if bg_color_text:
logger.debug(" - Extracted bgcolor '%s' from regex!" % bg_color_text)
return QtGui.QColor(bg_color_text)
#
# sometimes the above one isn't present... so try this one (7.1 - 7.4 maybe?)
#
# TODO: IDA 7.5 says c1 is /* line-fg-default */ ... but it's possible c1
# had the bg color of the line in other builds of 7.x? I'm not sure but
# this should be double checked at some point and can maybe just be removed
# in favor of c41 (line-bg-default) as that's what we really want
#
bg_color_text = get_string_between(html, '.c1 \{ background-color: ', ';')
if bg_color_text:
logger.debug(" - Extracted background-color '%s' from line-fg-default!" % bg_color_text)
return QtGui.QColor(bg_color_text)
# -- IDA 7.5 says c41 is /* line-bg-default */, a.k.a the bg color for disassembly text
bg_color_text = get_string_between(html, '.c41 \{ background-color: ', ';')
if bg_color_text:
logger.debug(" - Extracted background-color '%s' from line-bg-default!" % bg_color_text)
return QtGui.QColor(bg_color_text)
logger.debug(" - HTML color regex failed...")
logger.debug(html)
return None
def _get_ida_bg_color_from_view(self):
"""
Get the background color of the IDA disassembly views via widget inspection.
"""
logger.debug("Attempting to get IDA disassembly background color from view...")
names = ["Enums", "Structures"]
names += ["Hex View-%u" % i for i in range(5)]
names += ["IDA View-%c" % chr(ord('A') + i) for i in range(5)]
# find a form (eg, IDA view) to analyze colors from
for window_name in names:
twidget = idaapi.find_widget(window_name)
if twidget:
break
else:
logger.debug(" - Failed to find donor view...")
return None
# touch the target form so we know it is populated
self._touch_ida_window(twidget)
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
if USING_PYSIDE6:
from shiboken6 import wrapInstance
else:
import sip
def wrapInstance(ptr, base=None):
return sip.wrapinstance(int(ptr), base)
widget = wrapInstance(int(twidget), QtWidgets.QWidget)
pixmap = widget.grab(QtCore.QRect(0, 10, widget.width(), 1))
# convert the raw pixmap into an image (easier to interface with)
image = QtGui.QImage(pixmap.toImage())
# return the predicted background color
return QtGui.QColor(predict_bg_color(image))
def _touch_ida_window(self, target):
"""
Touch a window/widget/form to ensure it gets drawn by IDA.
XXX/HACK:
We need to ensure that widget we will analyze actually gets drawn
so that there are colors for us to steal.
To do this, we switch to it, and switch back. I tried a few different
ways to trigger this from Qt, but could only trigger the full
painting by going through the IDA routines.
"""
# get the currently active widget/form title (the form itself seems transient...)
twidget = idaapi.get_current_widget()
title = idaapi.get_widget_title(twidget)
# touch the target window by switching to it
idaapi.activate_widget(target, True)
flush_qt_events()
# locate our previous selection
previous_twidget = idaapi.find_widget(title)
# return us to our previous selection
idaapi.activate_widget(previous_twidget, True)
flush_qt_events()
#------------------------------------------------------------------------------
# Disassembler Context API (database-specific)
#------------------------------------------------------------------------------
class IDAContextAPI(DisassemblerContextAPI):
def __init__(self, dctx):
super(IDAContextAPI, self).__init__(dctx)
@property
def busy(self):
return not(idaapi.auto_is_ok())
#--------------------------------------------------------------------------
# API Shims
#--------------------------------------------------------------------------
@IDACoreAPI.execute_read
def get_current_address(self):
return idaapi.get_screen_ea()
def get_database_directory(self):
return idautils.GetIdbDir()
def get_function_addresses(self):
return list(idautils.Functions())
def get_function_name_at(self, address):
return idaapi.get_short_name(address)
def get_function_raw_name_at(self, function_address):
return idaapi.get_name(function_address)
def get_imagebase(self):
return idaapi.get_imagebase()
def get_root_filename(self):
return idaapi.get_root_filename()
def navigate(self, address):
return idaapi.jumpto(address)
def navigate_to_function(self, function_address, address):
return self.navigate(address)
def set_function_name_at(self, function_address, new_name):
idaapi.set_name(function_address, new_name, idaapi.SN_NOWARN)
#--------------------------------------------------------------------------
# Hooks API
#--------------------------------------------------------------------------
def create_rename_hooks(self):
return RenameHooks()
#------------------------------------------------------------------------------
# Function Prefix API
#------------------------------------------------------------------------------
PREFIX_SEPARATOR = "%"
#------------------------------------------------------------------------------
# Hooking
#------------------------------------------------------------------------------
class RenameHooks(idaapi.IDB_Hooks):
def renamed(self, address, new_name, local_name):
"""
Capture all IDA rename events.
"""
# we should never care about local renames (eg, loc_40804b), ignore
if local_name or new_name.startswith("loc_"):
return 0
rendered_name = idaapi.get_short_name(address)
# call the 'renamed' callback, that will get hooked by a listener
self.name_changed(address, rendered_name)
# must return 0 to keep IDA happy...
return 0
def name_changed(self, address, new_name):
"""
A placeholder callback, which will get hooked / replaced once live.
"""
pass
#------------------------------------------------------------------------------
# HexRays Util
#------------------------------------------------------------------------------
def hexrays_available():
"""
Return True if an IDA decompiler is loaded and available for use.
"""
try:
import ida_hexrays
return ida_hexrays.init_hexrays_plugin()
except ImportError:
return False
def map_line2citem(decompilation_text):
"""
Map decompilation line numbers to citems.
This function allows us to build a relationship between citems in the
ctree and specific lines in the hexrays decompilation text.
Output:
+- line2citem:
| a map keyed with line numbers, holding sets of citem indexes
|
| eg: { int(line_number): sets(citem_indexes), ... }
'
"""
line2citem = {}
#
# it turns out that citem indexes are actually stored inline with the
# decompilation text output, hidden behind COLOR_ADDR tokens.
#
# here we pass each line of raw decompilation text to our crappy lexer,
# extracting any COLOR_ADDR tokens as citem indexes
#
for line_number in xrange(decompilation_text.size()):
line_text = decompilation_text[line_number].line
line2citem[line_number] = lex_citem_indexes(line_text)
#logger.debug("Line Text: %s" % binascii.hexlify(line_text))
return line2citem
def map_line2node(cfunc, metadata, line2citem):
"""
Map decompilation line numbers to node (basic blocks) addresses.
This function allows us to build a relationship between graph nodes
(basic blocks) and specific lines in the hexrays decompilation text.
Output:
+- line2node:
| a map keyed with line numbers, holding sets of node addresses
|
| eg: { int(line_number): set(nodes), ... }
'
"""
line2node = {}
treeitems = cfunc.treeitems
function_address = cfunc.entry_ea
#
# prior to this function, a line2citem map was built to tell us which
# citems reside on any given line of text in the decompilation output.
#
# now, we walk through this line2citem map one 'line_number' at a time in
# an effort to resolve the set of graph nodes associated with its citems.
#
for line_number, citem_indexes in iteritems(line2citem):
nodes = set()
#
# we are at the level of a single line (line_number). we now consume
# its set of citems (citem_indexes) and attempt to identify explicit
# graph nodes they claim to be sourced from (by their reported EA)
#
for index in citem_indexes:
# get the code address of the given citem
try:
item = treeitems[index]
address = item.ea
# apparently this is a thing on IDA 6.95
except IndexError as e:
continue
# find the graph node (eg, basic block) that generated this citem
node = metadata.get_node(address)
# address not mapped to a node... weird. continue to the next citem
if not node:
#logger.warning("Failed to map node to basic block")
continue
#
# we made it this far, so we must have found a node that contains
# this citem. save the computed node_id to the list of known
# nodes we have associated with this line of text
#
nodes.add(node.address)
#
# finally, save the completed list of node ids as identified for this
# line of decompilation text to the line2node map that we are building
#
line2node[line_number] = nodes
# all done, return the computed map
return line2node
def lex_citem_indexes(line):
"""
Lex all ctree item indexes from a given line of text.
The HexRays decompiler output contains invisible text tokens that can
be used to attribute spans of text to the ctree items that produced them.
This function will simply scrape and return a list of all of these
tokens (COLOR_ADDR) which contain item indexes into the ctree.
"""
i = 0
indexes = []
line_length = len(line)
# lex COLOR_ADDR tokens from the line of text
while i < line_length:
# does this character mark the start of a new COLOR_* token?
if line[i] == idaapi.COLOR_ON:
# yes, so move past the COLOR_ON byte
i += 1
# is this sequence for a COLOR_ADDR?
if ord(line[i]) == idaapi.COLOR_ADDR:
# yes, so move past the COLOR_ADDR byte
i += 1
#
# A COLOR_ADDR token is followed by either 8, or 16 characters
# (a hex encoded number) that represents an address/pointer.
# in this context, it is actually the index number of a citem
#
citem_index = int(line[i:i+idaapi.COLOR_ADDR_SIZE], 16)
i += idaapi.COLOR_ADDR_SIZE
# save the extracted citem index
indexes.append(citem_index)
# skip to the next iteration as i has moved
continue
# nothing we care about happened, keep lexing forward
i += 1
# return all the citem indexes extracted from this line of text
return indexes
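Outside of IDA, the lexer above can be exercised with stand-in values for the `idaapi` constants. The values below (`COLOR_ON` = `'\x01'`, `COLOR_ADDR` = `0x28`, `COLOR_ADDR_SIZE` = 16) are assumptions for the sketch, not a substitute for the real SDK constants:

```python
# stand-in values for the idaapi constants (assumed, typical 64-bit values)
COLOR_ON = "\x01"
COLOR_ADDR = 0x28
COLOR_ADDR_SIZE = 16

def lex_citem_indexes(line):
    i, indexes = 0, []
    while i < len(line):
        # does this character mark the start of a new COLOR_* token?
        if line[i] == COLOR_ON:
            i += 1
            # is this sequence a COLOR_ADDR?
            if ord(line[i]) == COLOR_ADDR:
                i += 1
                # the next COLOR_ADDR_SIZE chars are a hex-encoded citem index
                indexes.append(int(line[i:i+COLOR_ADDR_SIZE], 16))
                i += COLOR_ADDR_SIZE
                continue
        i += 1
    return indexes

# a fake decompilation line with two embedded citem indexes (0x1 and 0x2A)
line = "if ( " + COLOR_ON + chr(COLOR_ADDR) + "%016X" % 0x1 + "x" \
     + COLOR_ON + chr(COLOR_ADDR) + "%016X" % 0x2A + " )"
print(lex_citem_indexes(line))  # -> [1, 42]
```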
@@ -2,11 +2,8 @@ import os
import sys
import logging
import idaapi
#
# shamelessly ripped from Solidarity
#
from .misc import makedirs
from .disassembler import disassembler
#------------------------------------------------------------------------------
# Log / Print helpers
@@ -14,15 +11,15 @@ import idaapi
def lmsg(message):
"""
Print a message to the IDA output window, prefixed with [Lighthouse]
Print a message to the disassembler output window, prefixed with [Lighthouse]
"""
# prefix the message
prefix_message = "[Lighthouse] %s" % message
# only print to IDA if the output window is alive
if idaapi.is_msg_inited():
print prefix_message
# only print to disassembler if its output window is alive
if disassembler.is_msg_inited():
disassembler.message(prefix_message)
else:
logger.info(message)
@@ -30,7 +27,11 @@ def get_log_dir():
"""
Return the Lighthouse log directory.
"""
return os.path.join(idaapi.get_user_idadir(), "lighthouse_logs")
log_directory = os.path.join(
disassembler.get_disassembler_user_directory(),
"lighthouse_logs"
)
return log_directory
def logging_started():
"""
@@ -54,7 +55,8 @@ class LoggerProxy(object):
def write(self, buf):
for line in buf.rstrip().splitlines():
self._logger.log(self._log_level, line.rstrip())
self._stream.write(buf)
if self._stream:
self._stream.write(buf)
def flush(self):
pass
@@ -66,7 +68,7 @@ class LoggerProxy(object):
# Initialize Logging
#------------------------------------------------------------------------------
MAX_LOGS = 5
MAX_LOGS = 10
def cleanup_log_directory(log_directory):
"""
Retain only the most recent MAX_LOGS logs.
@@ -79,8 +81,8 @@ def cleanup_log_directory(log_directory):
if os.path.isfile(filepath):
filetimes[os.path.getmtime(filepath)] = filepath
# get the filetimes and check if there's enough enough to warrant cleanup
times = filetimes.keys()
# get the filetimes and check if there's enough to warrant cleanup
times = list(filetimes.keys())
if len(times) < MAX_LOGS:
return
@@ -115,8 +117,11 @@ def start_logging():
# create a directory for lighthouse logs if it does not exist
log_dir = get_log_dir()
if not os.path.exists(log_dir):
os.makedirs(log_dir)
try:
makedirs(log_dir)
except Exception as e:
logger.disabled = True
return logger
# construct the full log path
log_path = os.path.join(log_dir, "lighthouse.%s.log" % os.getpid())
@@ -124,7 +129,7 @@ def start_logging():
# config the logger
logging.basicConfig(
filename=log_path,
format='%(asctime)s | %(name)20s | %(levelname)7s: %(message)s',
format='%(asctime)s | %(name)28s | %(levelname)7s: %(message)s',
datefmt='%m-%d-%Y %H:%M:%S',
level=logging.DEBUG
)
@@ -1,9 +1,15 @@
import os
import re
import errno
import struct
import weakref
import datetime
import threading
import collections
import idaapi
from .shims import using_pyqt5, QtCore, QtGui, QtWidgets
from .python import *
BADADDR = 0xFFFFFFFFFFFFFFFF
#------------------------------------------------------------------------------
# Plugin Util
@@ -23,42 +29,71 @@ def plugin_resource(resource_name):
)
#------------------------------------------------------------------------------
# UI Util
# Thread Util
#------------------------------------------------------------------------------
def MonospaceFont():
def is_mainthread():
"""
Convenience alias for creating a monospace Qt font object.
Return a bool that indicates if this is the main application thread.
"""
font = QtGui.QFont("Monospace")
font.setStyleHint(QtGui.QFont.TypeWriter)
return font
return isinstance(threading.current_thread(), threading._MainThread)
def singleshot(ms, function=None):
def mainthread(f):
"""
A Qt Singleshot timer that can be stopped.
A debug decorator to ensure that a function is always called from the main thread.
"""
timer = QtCore.QTimer()
timer.setInterval(ms)
timer.setSingleShot(True)
timer.timeout.connect(function)
return timer
def wrapper(*args, **kwargs):
assert is_mainthread()
return f(*args, **kwargs)
return wrapper
def copy_to_clipboard(data):
def not_mainthread(f):
"""
Copy the given data (a string) to the user clipboard.
A debug decorator to ensure that a function is never called from the main thread.
"""
cb = QtWidgets.QApplication.clipboard()
cb.clear(mode=cb.Clipboard)
cb.setText(data, mode=cb.Clipboard)
def wrapper(*args, **kwargs):
assert not is_mainthread()
return f(*args, **kwargs)
return wrapper
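The `isinstance(..., threading._MainThread)` check above leans on a private class; on Python 3.4+ the same test can be written against the public API. A small sketch contrasting the two, checked from both the main thread and a worker:

```python
import threading

def is_mainthread():
    # public-API equivalent (Python 3.4+) of the _MainThread isinstance check
    return threading.current_thread() is threading.main_thread()

results = {}
results["main"] = is_mainthread()

# the same probe from a worker thread should report False
worker = threading.Thread(target=lambda: results.update(worker=is_mainthread()))
worker.start()
worker.join()

print(results)  # -> {'main': True, 'worker': False}
```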
#------------------------------------------------------------------------------
# Theme Util
#------------------------------------------------------------------------------
def swap_rgb(i):
"""
Swap RRGGBB (integer) to BBGGRR.
"""
return struct.unpack("<I", struct.pack(">I", i))[0] >> 8
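IDA stores colors as `0xBBGGRR` while Qt expects `0xRRGGBB`, so this helper just reverses the three color bytes. A quick worked check of the pack/unpack trick (the sample values are arbitrary):

```python
import struct

def swap_rgb(i):
    # pack big-endian, unpack little-endian to reverse all four bytes,
    # then shift off the stray zero byte
    return struct.unpack("<I", struct.pack(">I", i))[0] >> 8

print("%06X" % swap_rgb(0xAABBCC))  # -> CCBBAA

# applying the swap twice round-trips the original value
assert swap_rgb(swap_rgb(0x112233)) == 0x112233
```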
def test_color_brightness(color):
"""
Test the brightness of a color.
"""
if color.lightness() > 255.0/2:
return "light"
else:
return "dark"
#------------------------------------------------------------------------------
# Python Util
#------------------------------------------------------------------------------
def makedirs(path, exists_ok=True):
"""
Make a fully qualified path.
"""
try:
os.makedirs(path)
except OSError as e:
if e.errno != errno.EEXIST:
raise e
if not exists_ok:
raise e
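On Python 3 the helper above is effectively `os.makedirs(path, exist_ok=True)`; a quick check of its idempotent behavior (the temp directory is created only for the demonstration):

```python
import errno
import os
import tempfile

def makedirs(path, exists_ok=True):
    # same logic as above: swallow EEXIST unless the caller opted out
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
        if not exists_ok:
            raise

base = tempfile.mkdtemp()
target = os.path.join(base, "a", "b")
makedirs(target)   # creates the intermediate directories too
makedirs(target)   # second call is a harmless no-op
print(os.path.isdir(target))  # -> True
```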
def chunks(l, n):
"""
Yield successive n-sized chunks from l.
Yield successive n-sized chunks from a list (l).
From http://stackoverflow.com/a/312464
"""
@@ -67,15 +102,36 @@ def chunks(l, n):
def hex_list(items):
"""
Return a string of a python-like list string, with hex numbers.
Return a string of a python-like list, with hex numbers.
[0, 5420, 1942512] --> '[0x0, 0x152C, 0x1DA30]'
"""
return '[{}]'.format(', '.join('0x%X' % x for x in items))
def human_timestamp(timestamp):
"""
Return a human readable timestamp for a given epoch.
"""
dt = datetime.datetime.fromtimestamp(timestamp)
return dt.strftime("%b %d %Y %H:%M:%S")
def get_string_between(text, before, after):
"""
Get the string between two strings.
"""
pattern = "%s(.*)%s" % (before, after)
result = re.search(pattern, text)
if not result:
return None
return result.group(1)
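Since `before` and `after` are spliced directly into a regex, callers must escape any regex metacharacters themselves, which is why the CSS lookups earlier in this diff write `\{`. A standalone sketch with made-up HTML/CSS inputs:

```python
import re

def get_string_between(text, before, after):
    # before/after become part of the pattern, so the caller escapes them
    result = re.search("%s(.*)%s" % (before, after), text)
    return result.group(1) if result else None

html = '<body bgcolor="#2A2A2A">'
print(get_string_between(html, '<body bgcolor="', '">'))  # -> #2A2A2A

css = ".c41 { background-color: #101010; }"
print(get_string_between(css, r".c41 \{ background-color: ", ";"))  # -> #101010
```

Note that `(.*)` is greedy, which is fine here because each sample input contains a single delimiter pair.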
#------------------------------------------------------------------------------
# Python Callback / Signals
#------------------------------------------------------------------------------
def register_callback(callback_list, callback):
"""
Register a given callable (callback) to the given callback_list.
Register a callable function to the given callback_list.
Adapted from http://stackoverflow.com/a/21941670
"""
@@ -90,16 +146,19 @@ def register_callback(callback_list, callback):
# 'register' the callback
callback_list.append(callback_ref)
return callback_ref
def notify_callback(callback_list):
def notify_callback(callback_list, *args):
"""
Notify the given list of registered callbacks.
Notify the given list of registered callbacks of an event.
The given list (callback_list) is a list of weakref'd callables
registered through the _register_callback function. To notify the
callbacks we simply loop through the list and call them.
registered through the register_callback() function. To notify the
callbacks of an event, this function will simply loop through the list
and call them.
This routine self-heals by removing dead callbacks for deleted objects.
This routine self-heals by removing dead callbacks for deleted objects as
it encounters them.
Adapted from http://stackoverflow.com/a/21941670
"""
@@ -128,9 +187,9 @@ def notify_callback(callback_list):
# call the object instance callback
try:
callback(obj)
callback(obj, *args)
# assume a Qt cleanup/deletion occured
# assume a Qt cleanup/deletion occurred
except RuntimeError as e:
cleanup.append(callback_ref)
continue
@@ -144,100 +203,18 @@ def notify_callback(callback_list):
continue
# call the static callback
callback()
callback(*args)
# remove the deleted callbacks
for callback_ref in cleanup:
callback_list.remove(callback_ref)
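The register/notify/unregister trio above can be sketched end-to-end with `weakref.WeakMethod` in place of the (object, function) weakref pair the original keeps; the `Listener` class and event name are made up for the demo:

```python
import weakref

def register_callback(callback_list, callback):
    # bound methods are held weakly so a deleted listener object
    # does not keep the callback (or itself) alive
    if hasattr(callback, "__self__"):
        ref = weakref.WeakMethod(callback)
    else:
        ref = weakref.ref(callback)
    callback_list.append(ref)
    return ref

def notify_callback(callback_list, *args):
    cleanup = []
    for ref in callback_list:
        callback = ref()
        if callback is None:          # referent was garbage collected
            cleanup.append(ref)
            continue
        callback(*args)
    for ref in cleanup:               # self-heal: drop dead callbacks
        callback_list.remove(ref)

class Listener:
    def __init__(self):
        self.events = []
    def on_event(self, payload):
        self.events.append(payload)

callbacks = []
listener = Listener()
ref = register_callback(callbacks, listener.on_event)
notify_callback(callbacks, "coverage_loaded")
print(listener.events)  # -> ['coverage_loaded']

# unregister_callback() boils down to removing the stored weakref
callbacks.remove(ref)
```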
#------------------------------------------------------------------------------
# Coverage Util
#------------------------------------------------------------------------------
def coalesce_blocks(blocks):
def unregister_callback(callback_list, callback_ref):
"""
Coalesce a list of (address, size) blocks.
----------------------------------------------------------------------
Example:
blocks = [
(4100, 10),
(4200, 100),
(4300, 10),
(4310, 20),
(4400, 10),
]
Returns:
coalesced = [(4100, 10), (4200, 130), (4400, 10)]
Remove a previously-registered callback reference.
"""
# nothing to do
if not blocks:
return []
elif len(blocks) == 1:
return blocks
# before we can operate on the blocks, we must ensure they are sorted
blocks = sorted(blocks)
#
# coalesce the list of given blocks
#
coalesced = [blocks.pop(0)]
while blocks:
block_start, block_size = blocks.pop(0)
#
# compute the end address of the current coalescing block. if the
# blocks do not overlap, create a new block to start coalescing from
#
if sum(coalesced[-1]) < block_start:
coalesced.append((block_start, block_size))
continue
#
# the blocks overlap, so update the current coalescing block
#
coalesced[-1] = (coalesced[-1][0], (block_start+block_size) - coalesced[-1][0])
# return the list of coalesced blocks
return coalesced
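The merge logic above, restated with a for-loop, can be exercised directly against the docstring's example:

```python
def coalesce_blocks(blocks):
    # merge overlapping/adjacent (address, size) blocks, as in the
    # docstring example above
    if not blocks:
        return []
    blocks = sorted(blocks)
    coalesced = [blocks[0]]
    for start, size in blocks[1:]:
        # no overlap with the current block: start a new one
        if sum(coalesced[-1]) < start:
            coalesced.append((start, size))
        # overlap: extend the current block to the new end address
        else:
            coalesced[-1] = (coalesced[-1][0], (start + size) - coalesced[-1][0])
    return coalesced

blocks = [(4100, 10), (4200, 100), (4300, 10), (4310, 20), (4400, 10)]
print(coalesce_blocks(blocks))  # -> [(4100, 10), (4200, 130), (4400, 10)]
```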
def rebase_blocks(base, basic_blocks):
"""
Rebase a list of basic blocks (address, size) to the given base.
"""
return map(lambda x: (base + x[0], x[1]), basic_blocks)
def build_hitmap(data):
"""
Build a hitmap from the given list of address.
A hitmap is a map of address --> number of executions.
The list of input addresses can be any sort of runtime trace, coverage,
or profiling data that one would like to build a hitmap for.
"""
output = collections.defaultdict(int)
# if there is no input data, simply return an empty hitmap
if not data:
return output
#
# walk through the given list of given addresses and build a
# corresponding hitmap for them
#
for address in data:
output[address] += 1
# return the hitmap
return output
try:
callback_list.remove(callback_ref)
except (ValueError, AttributeError):
pass
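The `build_hitmap()` shown in this diff reduces to counting occurrences per address; a runnable sketch (the trace addresses are made up):

```python
import collections

def build_hitmap(data):
    # map address -> number of executions
    output = collections.defaultdict(int)
    for address in data:
        output[address] += 1
    return output

# a toy runtime trace where 0x401000 executed three times
trace = [0x401000, 0x401004, 0x401000, 0x401000, 0x401008]
print(dict(build_hitmap(trace)))
```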
@@ -0,0 +1,69 @@
import sys
import operator
#------------------------------------------------------------------------------
# Python 2/3 Compatibility Shims
#------------------------------------------------------------------------------
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
#
# xrange shim
#
if PY3:
xrange = range # is this bad lol
#
# Queue --> queue shim
#
try:
import Queue as queue
except ImportError:
import queue
#
# iter* shims by Benjamin Peterson, from https://github.com/benjaminp/six
#
if PY3:
def iterkeys(d, **kw):
return iter(d.keys(**kw))
def itervalues(d, **kw):
return iter(d.values(**kw))
def iteritems(d, **kw):
return iter(d.items(**kw))
def iterlists(d, **kw):
return iter(d.lists(**kw))
viewkeys = operator.methodcaller("keys")
viewvalues = operator.methodcaller("values")
viewitems = operator.methodcaller("items")
else:
def iterkeys(d, **kw):
return d.iterkeys(**kw)
def itervalues(d, **kw):
return d.itervalues(**kw)
def iteritems(d, **kw):
return d.iteritems(**kw)
def iterlists(d, **kw):
return d.iterlists(**kw)
viewkeys = operator.methodcaller("viewkeys")
viewvalues = operator.methodcaller("viewvalues")
viewitems = operator.methodcaller("viewitems")
@@ -0,0 +1,6 @@
from .shim import *
if QT_AVAILABLE:
from .util import *
from .waitbox import WaitBox
@@ -0,0 +1,118 @@
#
# this global is used to indicate whether Qt bindings for python are present
# and available for use by Lighthouse.
#
QT_AVAILABLE = False
#------------------------------------------------------------------------------
# PyQt5 <--> PySide2 Compatibility
#------------------------------------------------------------------------------
#
# we use this file to shim/re-alias a few Qt API's to ensure compatibility
# between the popular Qt frameworks. these shims serve to reduce the number
# of compatibility checks in the plugin code that consumes them.
#
# this file was critical for retaining compatibility with Qt4 frameworks
# used by IDA 6.8/6.95, but it less important now. support for Qt 4 and
# older versions of IDA (< 7.0) were deprecated in Lighthouse v0.9.0
#
USING_PYQT5 = False
USING_PYSIDE2 = False
USING_PYSIDE6 = False
#
# TODO/QT: This file is getting pretty gross. this whole shim system
# should probably get refactored as I really don't want disassembler
# specific dependencies in here...
#
# IDA 9.2 SDK fix: migrate from PyQt5 to PySide6
# https://docs.hex-rays.com/user-guide/plugins/migrating-pyqt5-code-to-pyside6
try:
import idaapi
ver_major, ver_minor = map(int, idaapi.get_kernel_version().split("."))
USING_NEW_IDA = ver_major == 9 and ver_minor >= 2
USING_OLD_IDA = not(USING_NEW_IDA)
except ImportError:
USING_NEW_IDA = False
USING_OLD_IDA = False
try:
import binaryninjaui
USING_NEW_BINJA = "qt_major_version" in binaryninjaui.__dict__ and binaryninjaui.qt_major_version == 6
USING_OLD_BINJA = not(USING_NEW_BINJA)
except ImportError:
USING_NEW_BINJA = False
USING_OLD_BINJA = False
#------------------------------------------------------------------------------
# PyQt5 Compatibility
#------------------------------------------------------------------------------
# attempt to load PyQt5 (IDA from 7.0 to 9.1)
if USING_OLD_IDA:
try:
import PyQt5.QtGui as QtGui
import PyQt5.QtCore as QtCore
import PyQt5.QtWidgets as QtWidgets
# importing went okay, PyQt5 must be available for use
QT_AVAILABLE = True
USING_PYQT5 = True
# import failed, PyQt5 is not available
except ImportError:
pass
#------------------------------------------------------------------------------
# PySide2 Compatibility
#------------------------------------------------------------------------------
# if PyQt5 did not import, try to load PySide2 (Old Binary Ninja / Cutter)
if not QT_AVAILABLE and USING_OLD_BINJA:
try:
import PySide2.QtGui as QtGui
import PySide2.QtCore as QtCore
import PySide2.QtWidgets as QtWidgets
# alias for less PySide2 <--> PyQt5 shimming
QtCore.pyqtSignal = QtCore.Signal
QtCore.pyqtSlot = QtCore.Slot
# importing went okay, PySide must be available for use
QT_AVAILABLE = True
USING_PYSIDE2 = True
# import failed. No Qt / UI bindings available...
except ImportError:
pass
#------------------------------------------------------------------------------
# PySide6 Compatibility
#------------------------------------------------------------------------------
# If all else fails, try to load PySide6 (New Binary Ninja and IDA)
if not QT_AVAILABLE and (USING_NEW_BINJA or USING_NEW_IDA):
try:
import PySide6.QtGui as QtGui
import PySide6.QtCore as QtCore
import PySide6.QtWidgets as QtWidgets
# alias for less PySide6 <--> PyQt5 shimming
QtCore.pyqtSignal = QtCore.Signal
QtCore.pyqtSlot = QtCore.Slot
QtWidgets.QAction = QtGui.QAction
# importing went okay, PySide must be available for use
QT_AVAILABLE = True
USING_PYSIDE6 = True
# import failed. No Qt / UI bindings available...
except ImportError:
pass
@@ -0,0 +1,368 @@
import sys
import time
import logging
import threading
from .shim import *
from ..misc import is_mainthread
from ..python import *
from ..disassembler import disassembler
logger = logging.getLogger("Lighthouse.Qt.Util")
#------------------------------------------------------------------------------
# Qt Fonts
#------------------------------------------------------------------------------
def MonospaceFont():
"""
Convenience alias for creating a monospace Qt font object.
"""
font = QtGui.QFont("Courier New")
font.setStyleHint(QtGui.QFont.Monospace)
return font
#------------------------------------------------------------------------------
# Qt Util
#------------------------------------------------------------------------------
def color_text(text, color):
"""
Return a colorized (HTML) version of the given string.
"""
return "<font color=\"%s\">%s</font>" % (color.name(), text)
def copy_to_clipboard(data):
"""
Copy the given data (a string) to the system clipboard.
"""
cb = QtWidgets.QApplication.clipboard()
cb.clear(mode=QtGui.QClipboard.Mode.Clipboard)
cb.setText(data, mode=QtGui.QClipboard.Mode.Clipboard)
def flush_qt_events():
"""
Flush the Qt event pipeline.
"""
app = QtCore.QCoreApplication.instance()
app.processEvents()
def get_qt_icon(name):
"""
Get a standard Qt icon by name.
"""
icon_type = getattr(QtWidgets.QStyle, name)
return QtWidgets.QApplication.style().standardIcon(icon_type)
def get_default_font_size():
"""
Get the default font size for this QApplication.
"""
return QtGui.QFont().pointSizeF()
def get_dpi_scale():
"""
Get a DPI-afflicted value useful for consistent UI scaling.
"""
font = MonospaceFont()
font.setPointSize(normalize_to_dpi(120))
fm = QtGui.QFontMetricsF(font)
# the font height is expected to be ~173.0 at normal DPI
return fm.height() / 173.0
def compute_color_on_gradient(percent, color1, color2):
"""
Compute the color specified by a percent between two colors.
"""
r1, g1, b1, _ = color1.getRgb()
r2, g2, b2, _ = color2.getRgb()
# compute the new color across the gradient of color1 -> color 2
r = r1 + int(percent * (r2 - r1))
g = g1 + int(percent * (g2 - g1))
b = b1 + int(percent * (b2 - b1))
# return the new color
return QtGui.QColor(r,g,b)
def move_mouse_event(mouse_event, position):
"""
Move the given mouse event to a different position.
"""
new_event = QtGui.QMouseEvent(
mouse_event.type(),
position,
mouse_event.button(),
mouse_event.buttons(),
mouse_event.modifiers()
)
return new_event
def normalize_to_dpi(font_size):
"""
Normalize the given font size based on the system DPI.
"""
if sys.platform == "darwin": # macos is lame
return font_size + 2
return font_size
def prompt_string(label, title, default=""):
"""
Prompt the user with a dialog to enter a string.
This does not block the IDA main thread (unlike idaapi.askstr)
"""
dpi_scale = get_dpi_scale()
dlg = QtWidgets.QInputDialog(None)
dlg.setWindowFlags(dlg.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint)
dlg.setInputMode(QtWidgets.QInputDialog.TextInput)
dlg.setLabelText(label)
dlg.setWindowTitle(title)
dlg.setTextValue(default)
dlg.resize(
int(dpi_scale*400),
int(dpi_scale*50)
)
dlg.setModal(True)
dlg.show()
dlg.setFocus(QtCore.Qt.PopupFocusReason)
ok = dlg.exec_()
text = str(dlg.textValue())
return (ok, text)
def predict_bg_color(image):
"""
Predict the 'background color' of a given image.
This function takes an image, and analyzes its first row of pixels. It
will return the color that it believes to be the 'background color' based
on the longest sequence of identical pixels.
"""
assert image.width() and image.height()
# the details for the longest known color streak will be saved in these
longest = 1
speculative_bg = image.pixel(0, 0)
# this will be the computed length of the current color streak
sequence = 1
# find the longest streak of color in a single pixel slice
for x in xrange(1, image.width()):
# the color of this pixel matches the last pixel, extend the streak count
if image.pixel(x, 0) == image.pixel(x-1,0):
sequence += 1
#
# this catches the case where the longest color streak is in fact
# the last one. this ensures the streak color will get saved.
#
if x != image.width() - 1:
continue
# color change, determine if this was the longest continuous color streak
if sequence > longest:
# save the last pixel as the longest sequence / most likely BG color
longest = sequence
speculative_bg = image.pixel(x-1, 0)
# reset the sequence counter
sequence = 1
# return the color we speculate to be the background color
return speculative_bg
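The streak heuristic can be checked without Qt by running the same scan over a plain list of packed pixel values (a sketch; `QImage.pixel()` returns packed RGB ints, and the sample row is invented):

```python
def longest_run_color(pixels):
    # pure-python analogue of the pixel-streak scan above: return the
    # value of the longest run of identical pixels in a row
    best_value, best_len = pixels[0], 1
    run = 1
    for i in range(1, len(pixels)):
        if pixels[i] == pixels[i-1]:
            run += 1
            # let the final pixel fall through so a trailing streak is saved
            if i != len(pixels) - 1:
                continue
        if run > best_len:
            best_len, best_value = run, pixels[i-1]
        run = 1
    return best_value

# a row sampled from a disassembly view: mostly background, some text pixels
row = [0x202020]*3 + [0xFFFFFF]*2 + [0x202020]*6 + [0x00FF00]
print(hex(longest_run_color(row)))  # -> 0x202020
```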
def remap_key_event(event, new_key):
"""
Change a given KeyPress QEvent to a different key.
"""
return QtGui.QKeyEvent(
QtCore.QEvent.KeyPress,
new_key,
event.modifiers(),
event.text(),
event.isAutoRepeat(),
event.count()
)
def singleshot(ms, function=None):
"""
A Qt Singleshot timer that can be stopped.
"""
timer = QtCore.QTimer()
timer.setInterval(ms)
timer.setSingleShot(True)
timer.timeout.connect(function)
return timer
#------------------------------------------------------------------------------
# Async Util
#------------------------------------------------------------------------------
def await_future(future):
"""
Wait for a queue (future) message without blocking the main (Qt) thread.
This is effectively a technique I use to get around completely blocking
IDA's mainthread while waiting for a threaded result that may need to make
use of the execute_sync operators.
Waiting for a 'future' thread result to come through via this function
lets other execute_sync actions slip through (at least Read, Fast).
"""
interval = 0.02 # the interval which we wait for a response
# run until the message arrives through the future (a queue)
while True:
# block for a brief period to see if the future completes
try:
return future.get(timeout=interval)
#
# the future timed out, so perhaps it is blocked on a request
# to the mainthread. flush the requests now and try again
#
except queue.Empty as e:
pass
logger.debug("Awaiting future...")
#
# if we are executing (well, blocking) as the main thread, we need
# to flush the event loop so IDA does not hang
#
if QT_AVAILABLE and is_mainthread():
flush_qt_events()
def await_lock(lock):
"""
Wait for a lock without blocking the main (Qt) thread.
See await_future() for more details.
"""
elapsed = 0 # total time elapsed waiting for the lock
interval = 0.02 # the interval (in seconds) between acquire attempts
timeout = 60.0 # the total time allotted to acquiring the lock
end_time = time.time() + timeout
# wait until the lock is available
while time.time() < end_time:
#
# attempt to acquire the given lock without blocking (via 'False').
# if we successfully acquire the lock, then we can return (success)
#
if lock.acquire(False):
logger.debug("Acquired lock!")
return
#
# the lock is not available yet. we need to sleep so we don't choke
# the cpu, and try to acquire the lock again next time through...
#
logger.debug("Awaiting lock...")
time.sleep(interval)
#
# if we are executing (well, blocking) as the main thread, we need
# to flush the event loop so IDA does not hang
#
if QT_AVAILABLE and is_mainthread():
flush_qt_events()
#
# we spent 60 seconds trying to acquire the lock, but never got it...
# to avoid hanging IDA indefinitely (or worse), we abort via exception
#
raise RuntimeError("Failed to acquire lock after %f seconds!" % timeout)
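The same deadline-bounded acquire loop can be exercised directly without Qt. A sketch with the 60-second timeout shortened for illustration (the helper name and delays are hypothetical):

```python
import threading
import time

def await_lock_plain(lock, interval=0.02, timeout=2.0):
    """Poll-acquire a lock with a deadline, as await_lock() does (sans Qt)."""
    end_time = time.time() + timeout
    while time.time() < end_time:
        if lock.acquire(False):       # non-blocking acquire attempt
            return
        time.sleep(interval)          # the real code also flushes Qt events here
    raise RuntimeError("Failed to acquire lock after %f seconds!" % timeout)

lock = threading.Lock()
lock.acquire()                              # lock held by 'another' context...
threading.Timer(0.1, lock.release).start()  # ...which releases it shortly
await_lock_plain(lock)                      # returns once the lock frees up
print(lock.locked())                        # -> True (we now hold it)
```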
class QMainthread(QtCore.QObject):
"""
A Qt object whose sole purpose is to execute code on the mainthread.
"""
toMainthread = QtCore.pyqtSignal(object)
toMainthreadFast = QtCore.pyqtSignal(object)
def __init__(self):
super(QMainthread, self).__init__()
# helpers used to ensure thread safety
self._lock = threading.Lock()
self._fast_refs = []
self._result_queue = queue.Queue()
# signals used to communicate with the Qt mainthread
self.toMainthread.connect(self._execute_with_result)
self.toMainthreadFast.connect(self._execute_fast)
#--------------------------------------------------------------------------
# Public
#--------------------------------------------------------------------------
def execute(self, function):
"""
Execute a function on the mainthread and wait for its return value.
This function is safe to call from any thread, at any time.
"""
# if we are already on the mainthread, execute the callable inline
if is_mainthread():
return function()
# execute the callable on the mainthread and wait for it to complete
with self._lock:
self.toMainthread.emit(function)
result = self._result_queue.get()
# return the result of executing on the mainthread
return result
def execute_fast(self, function):
"""
Execute a function on the mainthread without waiting for completion.
"""
#
# append the given function to a reference list.
#
# I do this because I am not confident python / qt will guarantee the
# lifetime of the callable (function) as we cross threads and the
# callee scope/callstack dissolves away from beneath us
#
# this callable will be deleted from the ref list in _execute_fast()
#
self._fast_refs.append(function)
# signal to the mainthread that a new function is ready to execute
self.toMainthreadFast.emit(function)
#--------------------------------------------------------------------------
# Internal
#--------------------------------------------------------------------------
def _execute_with_result(self, function):
try:
self._result_queue.put(function())
except Exception as e:
logger.exception("QMainthread Exception")
self._result_queue.put(None)
def _execute_fast(self, function):
function()
self._fast_refs.remove(function)
qt_mainthread = QMainthread()
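The signal/queue handshake in `execute()` can be mimicked without Qt: a worker posts a callable to a work queue, the designated "mainthread" runs it and posts the result back. A simplified sketch (single outstanding request at a time, as the original's lock also enforces; all names here are illustrative):

```python
import queue
import threading

class MainthreadPlain:
    """Non-Qt sketch of QMainthread: run callables on one designated thread."""

    def __init__(self):
        self._work = queue.Queue()
        self._results = queue.Queue()

    def execute(self, function):
        # called from a worker thread; block until the result is posted back
        self._work.put(function)
        return self._results.get()

    def pump_once(self):
        # called on the 'mainthread'; service one pending request
        function = self._work.get()
        try:
            self._results.put(function())
        except Exception:
            self._results.put(None)

mt = MainthreadPlain()
out = []
worker = threading.Thread(target=lambda: out.append(mt.execute(lambda: 2 + 2)))
worker.start()
mt.pump_once()            # the 'mainthread' runs the callable
worker.join()
print(out)                # -> [4]
```

In the real class, the `toMainthread` Qt signal plays the role of the work queue: emitting it queues `_execute_with_result` onto the Qt event loop, so no explicit `pump_once()` is needed.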
@@ -0,0 +1,102 @@
from .shim import *
from .util import get_dpi_scale
import logging
logger = logging.getLogger("Lighthouse.Qt.WaitBox")
#--------------------------------------------------------------------------
# Qt WaitBox
#--------------------------------------------------------------------------
class WaitBox(QtWidgets.QDialog):
"""
A Generic Qt WaitBox Dialog.
"""
def __init__(self, text, title="Please wait...", abort=None):
super(WaitBox, self).__init__()
# dialog text & window title
self._text = text
self._title = title
# abort routine (optional)
self._abort = abort
# initialize the dialog UI
self._ui_init()
def set_text(self, text):
"""
Change the waitbox text.
"""
self._text = text
self._text_label.setText(text)
qta = QtCore.QCoreApplication.instance()
qta.processEvents()
def show(self, modal=True):
self.setModal(modal)
super(WaitBox, self).show()
qta = QtCore.QCoreApplication.instance()
qta.processEvents()
#--------------------------------------------------------------------------
# Initialization - UI
#--------------------------------------------------------------------------
def _ui_init(self):
"""
Initialize UI elements.
"""
self.setWindowFlags(
self.windowFlags() & ~QtCore.Qt.WindowContextHelpButtonHint
)
self.setWindowFlags(
self.windowFlags() | QtCore.Qt.MSWindowsFixedSizeDialogHint
)
self.setWindowFlags(
self.windowFlags() & ~QtCore.Qt.WindowCloseButtonHint
)
# configure the main widget / form
self.setSizeGripEnabled(False)
self.setModal(True)
self._dpi_scale = get_dpi_scale()*5
# initialize abort button
self._abort_button = QtWidgets.QPushButton("Cancel")
# layout the populated UI just before showing it
self._ui_layout()
def _ui_layout(self):
"""
Layout the major UI elements of the widget.
"""
self.setWindowTitle(self._title)
self._text_label = QtWidgets.QLabel(self._text)
self._text_label.setAlignment(QtCore.Qt.AlignHCenter)
# vertical layout (whole widget)
v_layout = QtWidgets.QVBoxLayout()
v_layout.setAlignment(QtCore.Qt.AlignCenter)
v_layout.addWidget(self._text_label)
if self._abort:
self._abort_button.clicked.connect(self._abort)
v_layout.addWidget(self._abort_button)
v_layout.setSpacing(int(self._dpi_scale*3))
v_layout.setContentsMargins(
int(self._dpi_scale*5),
int(self._dpi_scale),
int(self._dpi_scale*5),
int(self._dpi_scale)
)
# scale widget dimensions based on DPI
height = int(self._dpi_scale * 15)
self.setMinimumHeight(height)
# compute the dialog layout
self.setLayout(v_layout)
@@ -0,0 +1,63 @@
import re
import json
import logging
import threading
try:
from urllib2 import urlopen # Py2
except ImportError:
from urllib.request import urlopen # Py3
logger = logging.getLogger("Lighthouse.Util.Update")
#------------------------------------------------------------------------------
# Update Checking
#------------------------------------------------------------------------------
UPDATE_URL = "https://api.github.com/repos/gaasedelen/lighthouse/releases/latest"
def check_for_update(current_version, callback):
"""
Perform a plugin update check.
"""
update_thread = threading.Thread(
target=async_update_check,
args=(current_version, callback,),
name="Lighthouse UpdateChecker"
)
update_thread.start()
def async_update_check(current_version, callback):
"""
An async worker thread to check for a plugin update.
"""
logger.debug("Checking for update...")
current_version = "v" + current_version
try:
response = urlopen(UPDATE_URL, timeout=5.0)
html = response.read()
info = json.loads(html)
remote_version = info["tag_name"]
except Exception:
logger.debug(" - Failed to reach GitHub for update check...")
return
# convert version #'s to integer for easy compare...
version_remote = int(''.join(re.findall(r'\d+', remote_version)))
version_local = int(''.join(re.findall(r'\d+', current_version)))
# no updates available...
logger.debug(" - Local: '%s' vs Remote: '%s'" % (current_version, remote_version))
if version_local >= version_remote:
logger.debug(" - No update needed...")
return
# notify the user if an update is available
update_message = "An update is available for Lighthouse!\n\n" \
" - Latest Version: %s\n" % (remote_version) + \
" - Current Version: %s\n\n" % (current_version) + \
"Please go download the update from GitHub."
callback(update_message)
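The digit-join comparison above works for simple tags but collapses multi-digit components (e.g. it orders "v0.9.10" above "v1.0.0"). A componentwise tuple compare avoids that; shown here only as an illustration, not what the plugin ships:

```python
import re

def version_tuple(version):
    """'v0.9.4' -> (0, 9, 4): compare components, not concatenated digits."""
    return tuple(int(part) for part in re.findall(r'\d+', version))

assert version_tuple("v1.0.0") > version_tuple("v0.9.10")
# the digit-join scheme misorders the same pair (910 vs 100):
assert int("".join(re.findall(r'\d+', "v0.9.10"))) > \
       int("".join(re.findall(r'\d+', "v1.0.0")))
print(version_tuple("v0.9.4"))  # -> (0, 9, 4)
```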
@@ -0,0 +1,27 @@
from lighthouse.util.log import logging_started, start_logging
from lighthouse.util.disassembler import disassembler
if not logging_started():
logger = start_logging()
#------------------------------------------------------------------------------
# Disassembler Agnostic Plugin Loader
#------------------------------------------------------------------------------
logger.debug("Resolving disassembler platform for plugin...")
if disassembler.headless:
logger.info("Disassembler '%s' is running headlessly" % disassembler.NAME)
logger.info(" - Lighthouse is not supported in headless modes (yet!)")
elif disassembler.NAME == "IDA":
logger.info("Selecting IDA loader...")
from lighthouse.integration.ida_loader import *
elif disassembler.NAME == "BINJA":
logger.info("Selecting Binary Ninja loader...")
from lighthouse.integration.binja_loader import *
else:
raise NotImplementedError("DISASSEMBLER-SPECIFIC SHIM MISSING")