Give your AI assistant eyes into your Android app
ComposeProof is an MCP server that lets AI coding assistants render Compose UI headlessly, inspect live devices, and verify screenshots — no emulator needed.
Works with any MCP client
The problem with AI + Android today
AI coding assistants are powerful at writing code, but blind when it comes to UI. The feedback loop is broken.
AI can read code but can't see the app
Your AI assistant writes Compose UI code but has zero visual feedback. It can't tell if a button is misaligned, a color is wrong, or a layout is broken.
Debugging is manual and slow
Run the app, navigate to the screen, check the output, copy the error, paste it back. Every iteration requires you as the middleman.
Visual testing needs human eyes
Screenshot tests exist, but setting them up is painful and reviewing diffs requires you to look. The AI should handle this end-to-end.
Two modes. Complete coverage.
ComposeProof works at build-time and run-time, giving AI assistants full visibility into your Compose UI.
Headless Rendering
Build-time / CI
Renders @Preview composables headlessly via Roborazzi. No device, no emulator — just PNGs the AI can see and verify.
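Under the hood these are ordinary Compose previews: any @Preview function in your codebase is a render target. A minimal sketch of one (LoginButtonPreview is a hypothetical name, not part of ComposeProof):

import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// An everyday preview function. This is what the render and verify
// tools turn into PNGs, with no device or emulator in the loop.
@Preview(showBackground = true)
@Composable
fun LoginButtonPreview() {
    Button(onClick = {}) {
        Text("Sign in")
    }
}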
Device Inspection
Run-time / Live
Connects to a live device via ADB. The AI can screenshot, tap, swipe, inspect UI trees, and navigate your app — closed-loop testing.
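What a hierarchy dump can surface depends on the semantics your UI exposes. A hedged sketch (AddStickerFab and the tag name are invented for illustration) of the kind of labeled node a tool like inspect_ui_tree has to work with:

import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Add
import androidx.compose.material3.Icon
import androidx.compose.material3.IconButton
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.testTag

@Composable
fun AddStickerFab(onClick: () -> Unit) {
    // testTag gives the node a stable handle in hierarchy dumps; a missing
    // contentDescription is the kind of a11y gap such a dump can flag.
    IconButton(onClick = onClick, modifier = Modifier.testTag("add_sticker")) {
        Icon(Icons.Filled.Add, contentDescription = "Add sticker")
    }
}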
See it in action
One prompt, zero human intervention. AI builds the app, discovers UI patterns, tests interactions, and reports results.
Recorded on a Pixel 9 Pro Fold with the StickerExplode demo app.
Build, deploy & launch
AI runs preflight to detect the device, picks the right Gradle variant, builds, installs, and launches — all from a single natural language prompt.
AI discovers the interaction model
The FAB tap fails, so the AI reads source code, discovers the FAB opens a bottom sheet sticker tray, and switches strategy. Every action auto-screenshots.
Drag testing with coordinate recalculation
ADB swipe doesn't trigger Compose gestures. The AI switches to raw touch events, detects resolution mismatch, recalculates coordinates, and successfully drags stickers.
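The recalculation itself is proportional scaling from screenshot space into the panel's real input space. A minimal sketch of the arithmetic (the function name and numbers are illustrative, not ComposeProof API):

// Map a point from screenshot resolution to device input resolution.
fun scaleToDevice(x: Int, y: Int, shotW: Int, shotH: Int, devW: Int, devH: Int): Pair<Int, Int> =
    Pair(x * devW / shotW, y * devH / shotH)

// e.g. (540, 1200) on a 1080x2400 screenshot becomes (1080, 2400)
// on a 2160x4800 panel.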
Results & persistent screenshots
AI produces a structured test report (3/3 pass), notes z-ordering and animations. Every screenshot from the session is saved to disk for human review.
Full uncut session
6 minutes — from prompt to test report, no edits.
23 tools built. More coming.
Every tool is an MCP endpoint your AI can call. Headless rendering, device interaction, performance profiling, and more.
insights Project overview — preview count, golden coverage, device status
render Render any @Preview headlessly via Roborazzi → PNG
list_previews Discover all @Preview functions — file, line, params
verify Single-call PASS/FAIL: render + golden + accessibility
render_batch Render/verify multiple previews with compact summary
diff Golden management: verify, record, or update baselines
preflight Check device + app state — connected, installed, screen
inspect_ui_tree Dump live Compose/View hierarchy with a11y warnings
device_interact Tap, swipe, type, scroll — AI navigates the app
get_recomposition_stats Find recomposition hotspots via compiler metrics
take_device_screenshot Capture device screen, auto-saved to disk
build_and_deploy Gradle build + install APK on device
get_build_status Check build success and APK version match
get_network_logs Capture OkHttp HTTP traffic from logcat
manage_proxy Set/clear device HTTP proxy
get_feature_flags Read/write SharedPreferences
inspect_permissions Runtime permissions — granted, denied, rationale needed
inspect_process_lifecycle Activity/Fragment lifecycle states for all components
inspect_navigation_graph Navigation graph, back stack, deep link patterns
inspect_datastore Jetpack DataStore preferences — all keys and values
inspect_coroutine_state Active coroutines — state, dispatchers, job hierarchy
execute_deeplink Fire a deep link URI and report which handler resolved it
simulate_process_death Recreate Activity to test save/restore state handling
audit_accessibility (planned) Full WCAG 2.1 audit: touch targets, contrast, focus
track_recompositions (planned) Count recompositions per composable per frame
analyze_stability (planned) Report stability classification of composable params
detect_memory_leaks (planned) LeakCanary heap analysis with reference chains
profile_startup (planned) Cold/warm/hot start breakdown with bottlenecks
test_edge_cases (planned) Simulate no network, low memory, date boundaries
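On the wire, each of these is a standard MCP tool call, so any MCP client can drive them. A hedged sketch of a request in MCP's JSON-RPC shape (the preview argument name is an assumption, not documented ComposeProof API):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "verify",
    "arguments": { "preview": "LoginScreenPreview" }
  }
}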
Up and running in 3 steps
Zero project footprint. No changes to your project's build files.
Install ComposeProof
One command via Homebrew. No build file changes, no Gradle plugin, no SDK integration needed.
brew tap aldefy/tap
brew install composeproof
Add to your MCP config
Point your AI client at ComposeProof. Works with Claude Code, Gemini CLI, Cursor, and Android Studio.
{
  "mcpServers": {
    "composeproof": {
      "command": "composeproof"
    }
  }
}
AI has eyes
Your AI assistant can now render previews, inspect live devices, verify screenshots, and iterate on UI autonomously.
You: "verify the login screen" AI: renders → verifies → PASS ✓
Roadmap
Five waves of capabilities, from core debugging to AI-powered autonomous testing.
Foundation (In progress)
inspect_navigation_graph, inspect_coroutine_state, inspect_datastore, inspect_permissions, execute_deeplink, inspect_process_lifecycle, simulate_process_death

Compose Intelligence (Up next)
track_recompositions, analyze_stability, inspect_compose_state, profile_lazy_list, semantic_ui_query

Advanced Debugging (Planned)
detect_memory_leaks, inspect_threads, capture_anr_trace, inspect_work_manager, inspect_crash_history

AI-Powered Workflows (Planned)
autonomous_exploration, correlate_state_and_ui, diff_app_state, verify_fix, audit_accessibility

Testing & CI (Planned)
test_edge_cases, snapshot_golden_image, run_smoke_test, verify_release_build, profile_startup

Get early access
ComposeProof is in active development. Join the waitlist and we'll notify you when it's ready.
No spam. Unsubscribe anytime.