
Commit 256dba4

Included testing file for agent
1 parent 5b78ce7 commit 256dba4

3 files changed: +102 -0 lines changed

README.md

Lines changed: 72 additions & 0 deletions
@@ -0,0 +1,72 @@
![Status](https://img.shields.io/badge/status-active-brightgreen)

# TechDocAgent

An agent that generates technical documentation from code.

## Overview

TechDocAgent automates the creation of technical documentation directly from your codebase. By combining language models with code analysis, it aims to keep your project's documentation up to date and comprehensive.

## Core Capabilities & Technologies

TechDocAgent leverages the following key technologies and capabilities:

* **Python:** The entire agent's logic, analysis components, and orchestration are built using Python.
* **XML:** Utilized for specific aspects of documentation generation and internal data representation, ensuring structured and parseable output where required.

## Installation

To get TechDocAgent up and running, follow these steps:

1. **Clone the repository:**

   ```bash
   git clone <repository_url_here>
   cd TechDocAgent
   ```

   *(Replace `<repository_url_here>` with the actual URL of the repository.)*

2. **Install dependencies:**

   Navigate to the project's root directory and install all required Python packages using `pip`:

   ```bash
   pip install -r requirements.txt
   ```

## Configuration

TechDocAgent relies on a Large Language Model (LLM) for its documentation generation capabilities. You will need to provide an API key for the chosen LLM, which is expected to be Google Gemini in this configuration.

* **`GEMINI_API_KEY`**: Set this environment variable to your actual Google Gemini API key.

Example (for bash/zsh):

```bash
export GEMINI_API_KEY="your_gemini_api_key_here"
```

It's recommended to add this to your shell's profile file (e.g., `.bashrc`, `.zshrc`) or use a `.env` file with a package like `python-dotenv` for local development.
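
For the `.env` route, a minimal loading sketch might look like the following; it mirrors the pattern used in `test_llm.py` from this commit and assumes `python-dotenv` is installed:

```python
# Minimal sketch: load GEMINI_API_KEY from a local .env file via python-dotenv.
# Assumes a .env file in the working directory containing: GEMINI_API_KEY=...
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into os.environ (if the file exists)
if not os.environ.get("GEMINI_API_KEY"):
    raise SystemExit("GEMINI_API_KEY is not set; see the Configuration section above.")
```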

## Usage

Currently, the primary way to understand and interact with TechDocAgent's functionality is by exploring its test suite. These tests serve as runnable examples demonstrating how different components of the agent work together.

* **See `test_analysis.py` for usage examples:** This file provides a good starting point to observe how code analysis, LLM interaction, and documentation generation steps are orchestrated. You can run individual tests or examine their structure to understand the agent's workflow.
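
For a quick orientation before diving into the tests, the end-to-end flow looks roughly like the sketch below; it mirrors `test_llm.py` from this commit, so the function names come from that file, and exact signatures may evolve:

```python
# Condensed sketch of the documentation flow, mirroring test_llm.py in this commit.
from techdocagent.pipeline import process_codebase
from techdocagent.prompts import fill_readme_prompt
from techdocagent.llm import generate_documentation
from techdocagent.output import output_documentation

processed = process_codebase(".")                   # analyze the codebase rooted here
prompt = fill_readme_prompt(
    processed,
    project_name="TechDocAgent",
    project_description="An agent that generates technical documentation from code.",
)
doc = generate_documentation(prompt)                # send the prompt to the Gemini-backed LLM
output_documentation(doc, output_path="README.md")  # write the generated documentation
```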

To run the tests:

```bash
pytest
```

or for a specific test file:

```bash
pytest test_analysis.py
```

## Test Suite Structure

The project's current development and testing efforts are organized around the following test files, which collectively demonstrate the agent's internal workings and capabilities:

* `test_analysis.py`: Contains tests related to the code analysis components, including parsing and understanding source code.
* `test_chunking.py`: Focuses on testing the logic for breaking down code or documentation into manageable chunks for LLM processing.
* `test_ingestion.py`: Tests the mechanisms for ingesting various forms of input data into the agent.
* `test_llm.py`: Dedicated to testing interactions with the Large Language Model, ensuring correct prompting and response handling.
* `test_llm_json.py`: Specifically tests scenarios where LLM interactions involve JSON-formatted inputs or outputs.
* `test_pipeline.py`: Provides end-to-end tests for the entire documentation generation pipeline, from code input to final output; the sketch after this list illustrates the per-file result structure these tests print.
* `test_prompt_fill.py`: Tests the templating and filling mechanisms for constructing prompts sent to the LLM.
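
As a rough guide to what the pipeline tests inspect, each per-file result from `process_codebase` carries file metadata plus a list of chunks. The field names below are taken from `test_pipeline.py` in this commit; the values are purely illustrative:

```python
# Illustrative shape of one entry returned by process_codebase(root).
# Field names come from test_pipeline.py; the values here are made up.
example_file_result = {
    "file_metadata": {
        "file_path": "techdocagent/pipeline.py",  # hypothetical path
        "language": "Python",
        "size_bytes": 2048,
    },
    "chunks": [
        {
            "type": "function",
            "name": "process_codebase",  # hypothetical chunk name
            "start_line": 1,
            "end_line": 40,
        },
    ],
}
```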

test_llm.py

Lines changed: 19 additions & 0 deletions
@@ -0,0 +1,19 @@
from techdocagent.pipeline import process_codebase
from techdocagent.prompts import fill_readme_prompt
from techdocagent.llm import generate_documentation
from techdocagent.output import output_documentation
import os
from dotenv import load_dotenv

if __name__ == "__main__":
    # Load environment variables from a local .env file, if present
    load_dotenv()
    # Ensure GEMINI_API_KEY is set
    if not os.environ.get('GEMINI_API_KEY'):
        print("ERROR: Please set the GEMINI_API_KEY environment variable before running this script.")
        exit(1)
    # Analyze the codebase, build the README prompt, and generate documentation
    root = "."
    processed = process_codebase(root)
    prompt = fill_readme_prompt(processed, project_name="TechDocAgent", project_description="An agent that generates technical documentation from code.")
    print("\nSending prompt to Gemini...\n")
    doc = generate_documentation(prompt)
    output_documentation(doc, output_path="README.md")

test_pipeline.py

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
from techdocagent.pipeline import process_codebase

if __name__ == "__main__":
    root = "."
    results = process_codebase(root)
    for file_result in results:
        meta = file_result['file_metadata']
        print(f"\nFile: {meta['file_path']} ({meta['language']}, {meta['size_bytes']} bytes)")
        print(f"Chunks: {len(file_result['chunks'])}")
        for chunk in file_result['chunks']:
            print(f"  - Type: {chunk['type']}, Name: {chunk['name']}, Lines: {chunk['start_line']}-{chunk['end_line']}")
