DFIR-ORC

Forensics artefact collection tool for systems running Microsoft Windows

Tags: Multi-Cloud, Open Source, Self-Hosted Only
Category: Incident Response & Forensics
Community Stars: 398
Last Commit: 4 months ago
Last Page Update: 19 days ago
Pricing Details: Free and open-source
Target Audience: Digital forensics professionals, incident responders, and security analysts.

DFIR-ORC collects forensically relevant data from Microsoft Windows systems in a decentralized, scalable manner. It parses and collects artefacts such as the Master File Table (MFT), registry hives, and event logs, without analyzing the data and without installing any software on the target machines.
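
A typical run drops the configured binary on the target and executes it directly. A minimal sketch follows, assuming a binary built with the default WolfLauncher embedded; the /keys option is taken from the WolfLauncher documentation and should be verified against your build:

```bat
:: Run the collection set embedded in this particular binary
DFIR-Orc.exe

:: List the collection keywords available in this binary
DFIR-Orc.exe /keys
```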

Technically, DFIR-ORC is a modular framework from which users build a custom binary, embedding their chosen tools and configurations as resources. Configuration is written in XML, and external tools such as the Sysinternals utilities and memory-dumping utilities (e.g., DumpIt, WinPmem) can be embedded alongside the framework's own.
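
To give a flavour of the format, below is a heavily simplified sketch of a WolfLauncher-style XML configuration pairing a command keyword with an embedded memory-dumping tool. Element and attribute names are illustrative approximations, not authoritative; consult the official configuration reference for the real schema:

```xml
<!-- Illustrative sketch only: names simplified from the real schema -->
<wolf>
  <archive name="Memory" keyword="Memory" compression="fast">
    <command keyword="DumpMemory">
      <!-- "7z:#winpmem.exe" would refer to a tool embedded as a resource -->
      <execute name="winpmem.exe" run="7z:#winpmem.exe" />
      <argument>--output physmem.raw</argument>
      <output name="physmem.raw" source="File" />
    </command>
  </archive>
</wolf>
```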

The build process uses Microsoft Visual Studio (2017 through 2022) and CMake to compile both 32-bit and 64-bit versions of the tool for maximum compatibility. The build environment can be set up on Microsoft's developer virtual machines, and several CMake options customize the build, such as building dependencies with vcpkg and enabling JSON support.
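
A minimal build sketch, assuming a Visual Studio 2019 x64 toolchain; the ORC_BUILD_VCPKG option reflects the vcpkg support mentioned above, but generator names and flags should be checked against the repository's README:

```bat
git clone --recursive https://github.com/DFIR-ORC/dfir-orc.git
cd dfir-orc
mkdir build && cd build

:: Configure a 64-bit build; ORC_BUILD_VCPKG builds dependencies via vcpkg
cmake -G "Visual Studio 16 2019" -A x64 -DORC_BUILD_VCPKG=ON ..

:: Compile a size-optimized binary
cmake --build . --config MinSizeRel
```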

Operationally, DFIR-ORC is designed to have a low impact on production environments, and resource usage can be fine-tuned through its configuration files. It gathers everything it collects into a single directory and leaves no residual programs on the target machines. Note that the tool performs no analysis; it is purely a collection tool. It can also encrypt its archives using PKCS7; these can be decrypted with OpenSSL or, because OpenSSL struggles with very large archives, with a custom Python implementation.
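
As a sketch, unwrapping a PKCS7 (CMS) envelope with the OpenSSL CLI could look like the following. File names and key material are placeholders, and openssl cms reads the whole input into memory, which is the limitation that motivates a streaming Python implementation for large archives:

```sh
# Decrypt the PKCS7/CMS envelope with the recipient's private key
openssl cms -decrypt -binary \
    -in collected_archive.7z.p7b \
    -inkey recipient_key.pem \
    -out collected_archive.7z
```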

Key considerations include ensuring compliance with data protection regulations such as GDPR when handling the collected data, and keeping the repository's git submodules up to date to maintain the integrity of the tool. Its scalability and flexibility make it suitable for large-scale deployments, but careful configuration and deployment scripting are needed for smooth operation.
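
Keeping the submodules current is an ordinary git operation:

```sh
# Pull the pinned versions of all nested submodules
git submodule update --init --recursive
```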
