
Use Screen Context with AI Commands on Mac

Toggle Screen Context on any command and the AI sees your screen alongside your text, producing truly context-aware responses.

How It Works

  1. Install Echoo

    Download Echoo for free. Grant Screen Recording permission when prompted.

  2. Enable Screen Context on a command

    Open Echoo settings, find any command, and toggle Screen Context on. It works per-command.

  3. Select text and trigger the command

    Highlight text in any app and press the command shortcut. Echoo captures your screen alongside the text.

  4. Get context-aware AI responses

    AI sees both your text and your screen, giving responses that understand the full visual context.

Best Practices & Tips

Screen Context adds a visual dimension to AI commands. By capturing a screenshot of your active screen and sending it alongside your text selection, AI models can understand the full context of what you are working on. This produces dramatically more relevant and accurate responses.
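Conceptually, pairing a screenshot with a text selection maps onto the multimodal chat payloads most providers accept. The sketch below is a minimal illustration, not Echoo's actual implementation: it assumes an OpenAI-style chat request, and the screenshot bytes are a placeholder for a real capture.

```python
import base64

def build_multimodal_request(selected_text: str, screenshot_png: bytes) -> dict:
    """Pair a text selection with a screenshot in an OpenAI-style chat payload."""
    image_b64 = base64.b64encode(screenshot_png).decode("ascii")
    return {
        "model": "gpt-4o",  # any image-capable model works here
        "messages": [
            {
                "role": "user",
                "content": [
                    # The selected text and the screen capture travel together,
                    # so the model sees both the words and their visual context.
                    {"type": "text", "text": selected_text},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }
        ],
    }

# Placeholder bytes stand in for a real screen capture.
request = build_multimodal_request("Explain this error", b"fake-png-bytes")
```

Because the image rides along as a data URL inside the same message, no separate upload step is needed; the model receives text and screen in one request.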

The most impactful use of Screen Context is in code assistance. When you select an error message and trigger the Explain Error command with Screen Context enabled, the AI sees not just the error text but also the surrounding code, your IDE layout, open files, and terminal output. This additional visual context often provides enough information for the AI to pinpoint the exact cause and suggest a specific fix rather than a generic explanation.

Screen Context transforms how designers work with Echoo in tools like Figma or Sketch. Enable Screen Context on a "Suggest UX Copy" command, and the AI sees your design layout alongside any selected text. It can suggest copy that fits the visual hierarchy, matches the UI pattern, and considers the surrounding interface elements. This visual awareness produces significantly better UX writing suggestions.

For productivity and project management, Screen Context lets AI understand your workflow context. When writing a Slack message about a bug, enable Screen Context so the AI sees the bug report or error on your screen. When drafting an email about a design review, let the AI see the design you are discussing. The additional context makes AI-generated text more specific and relevant.

Enable Screen Context selectively on commands where visual context matters. Not every command benefits from a screenshot. Grammar correction and translation work fine without visual context. Code review, error explanation, design feedback, and context-dependent writing are where Screen Context adds the most value. Toggling it per command means you only send screenshots when they are genuinely useful.

Choose multimodal AI providers for Screen Context commands. OpenAI GPT-4o, Anthropic Claude, and Google Gemini all support image input alongside text. For local processing, LLaVA and other multimodal models run via Ollama. Note that image-capable models are required; text-only models will ignore the screenshot.
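For the local route, Ollama's generate endpoint accepts base64-encoded images alongside the prompt. The following is a hedged sketch assuming a local Ollama server with the `llava` model pulled; the screenshot bytes are a placeholder, and this is an illustration of the API shape rather than Echoo's own code.

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_llava_payload(prompt: str, screenshot_png: bytes) -> dict:
    """Build an Ollama generate request pairing a prompt with a screenshot."""
    return {
        "model": "llava",
        "prompt": prompt,
        # Multimodal models served by Ollama take base64-encoded images.
        "images": [base64.b64encode(screenshot_png).decode("ascii")],
        "stream": False,
    }

def ask_llava(prompt: str, screenshot_png: bytes) -> str:
    """POST to a locally running Ollama server and return the model's reply."""
    body = json.dumps(build_llava_payload(prompt, screenshot_png)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_llava_payload("What is on this screen?", b"fake-png-bytes")
```

A text-only model served the same way would simply ignore the `images` field, which is why an image-capable model is required for Screen Context to have any effect.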

Pro Tips

  1. Enable Screen Context selectively on commands where visual context matters, like code review and error explanation, rather than on every command.

  2. For design work in Figma, enable Screen Context on a "Suggest UX Copy" command so the AI can see the design layout and suggest copy that fits the visual hierarchy.

  3. Use Screen Context with the Explain Error command in your IDE so AI can see the surrounding code, file structure, and terminal output for more accurate debugging help.

  4. Combine Screen Context with a voice instruction (no text selected) for screen-only analysis, like asking AI to describe what it sees or suggest improvements to a visible design.


Ready to Try It?

Download Echoo for free and start transforming text with AI shortcuts.