Refactoring Rust Without Knowing Rust: An AI-Assisted Workflow
I recently needed to refactor the Rust codebase of a Tauri project, but I had never learned the language's intricacies. I solved this by decoupling software engineering principles from syntax, using an AI-based workflow.
Here is the step-by-step process I used with Grok, Cursor, and Codex CLI.
1. The Code Review (Grok Code Fast 1)
For the initial audit, I used Grok Code Fast 1 inside Cursor. I chose this model because it is fast and free, which makes it ideal for scanning a large codebase quickly and cheaply.
I asked Grok to perform a comprehensive review focusing on code quality standards, dead code, complexity, naming, and error handling. Nothing too fancy.
Prompt:
can you perform an extensive code review of /src-tauri . check for dead code, overly complex code, too long files, cyclo complexity, bad names, badly handled errors, silent errors, etc… be thorough but skip test code. then write a report to /docs/rust-review.md with your comments, all the problems you’ve found, etc…
This generated a rust-review.md file listing specific issues as requested.
2. Creating an Execution Plan (OpenSpec)
A raw list of problems is difficult to execute atomically. To ensure the refactor was systematic and implementable, I asked Grok to convert the review findings into an OpenSpec change (OpenSpec is a spec-driven development workflow that turns proposed changes into explicit, reviewable specs for coding agents).
Prompt:
please create an openspec change for all of these changes. make sure to be explicit so that implementing the spec is easy
This translated the qualitative feedback into a structured specification with explicit, actionable steps.
3. Implementation (Codex CLI)
With the specification ready, I used Codex CLI to handle the actual coding. Rather than feeding it the entire spec at once, I processed the spec in small batches.
After each batch, I immediately ran the compiler and the integration tests. This closed the feedback loop: the compiler verified syntax and types, while the tests guarded logical integrity.
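Concretely, in a typical Rust setup this loop is just `cargo check` followed by `cargo test` after each batch. The project's actual tests aren't shown in this post, but an integration test in the conventional `tests/` directory looks roughly like this (the crate name `app` and the `parse_config` function are hypothetical placeholders):

```rust
// src-tauri/tests/smoke.rs — hypothetical integration test; `app` and
// `parse_config` are placeholder names, not from the actual project.
use app::parse_config;

#[test]
fn config_parsing_is_stable_across_refactors() {
    // A known-good input must keep producing the same result after each batch.
    let cfg = parse_config(r#"{ "port": 8080 }"#).expect("valid config should parse");
    assert_eq!(cfg.port, 8080);
}
```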
4. Verification and Results
To validate the end result, I fed the code back to Grok for a final assessment.
Prompt:
you’ve performed a @docs/rust-review.md and I’ve implemented these recommendations… Can you please review the code now and tell me what you think of the code quality?
The final review confirmed a significant improvement in code health. The overall quality score increased from 6.7/10 to 8.2/10 as measured by Grok.
Key improvements included:
- File Organization: overly long files were broken up, drastically improving maintainability.
- Error Handling: silent failures were removed in favor of proper logging and centralized error propagation (see the first sketch below).
- Architecture: a shared HTTP client with connection pooling was implemented (second sketch below).
- Test Coverage: new tests were even added along the way.
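To make the error-handling change concrete, here is a minimal sketch of the pattern, assuming the common `thiserror` and `log` crates; the names (`AppError`, `cleanup`) are hypothetical, not from the actual codebase:

```rust
// Hypothetical sketch: one centralized error type the whole crate
// propagates with `?`, instead of swallowing failures silently.
use thiserror::Error;

#[derive(Debug, Error)]
pub enum AppError {
    #[error("I/O error: {0}")]
    Io(#[from] std::io::Error),
    #[error("config error: {0}")]
    Config(String),
}

// Before: a silent failure that drops the error on the floor.
//     let _ = std::fs::remove_file(path);

// After: the failure is logged and propagated to the caller.
pub fn cleanup(path: &std::path::Path) -> Result<(), AppError> {
    std::fs::remove_file(path).map_err(|e| {
        log::warn!("failed to remove {}: {e}", path.display());
        e
    })?;
    Ok(())
}
```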
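As for the shared HTTP client, the gain comes from constructing the client once and reusing it everywhere, since `reqwest::Client` maintains an internal connection pool. A minimal sketch, assuming the `reqwest` crate (the project's actual setup may differ):

```rust
// Hypothetical sketch: one lazily-initialized client shared across the app.
use std::sync::OnceLock;

fn http_client() -> &'static reqwest::Client {
    static CLIENT: OnceLock<reqwest::Client> = OnceLock::new();
    CLIENT.get_or_init(|| {
        reqwest::Client::builder()
            .timeout(std::time::Duration::from_secs(30))
            .build()
            .expect("static client config should always build")
    })
}

async fn fetch(url: &str) -> Result<String, reqwest::Error> {
    // Every call reuses the same client, and with it the connection pool.
    http_client().get(url).send().await?.text().await
}
```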
Grok noted that the codebase now follows Rust best practices and is structured for a production environment. By leveraging AI to handle language-specific details, I was able to focus on architectural improvements, resulting in a cleaner, more robust codebase.
Is the result objectively better? Yes, because it satisfies my own engineering standards. Would a human engineer have done a better job? Who cares at this point? I don’t. Good engineering was the goal, and achieving it without knowing Rust was gold.