The days of the once coveted role of prompt engineer seem to be numbered. In July 2023, Anthropic made headlines with a $300,000 offer for prompt engineers, while entry-level positions commanded around $85,000 and senior positions averaged $200,000. But as AI systems evolve into agents powered by retrieval, memory, policies, application programming interfaces (APIs), and workflows, prompts are becoming just one cog in a larger machine.
Context engineering is already eclipsing prompt engineering, as has the volatile role of Chief AI Officer. But what role will humans play when context engineering itself is automated?
Has prompt engineering already passed its peak?
Prompt engineers acted as translators between humans and early AI models, turning natural language questions, or prompts, into structured instructions that produced reliable results. For example, instead of one-off user questions, they could design a template that consistently summarizes legal contracts with the right tone and accuracy.
Their value lay in combining strong writing and reasoning skills with some technical knowledge, such as APIs and frameworks like LangChain. But as AI models matured and their reasoning grew more sophisticated, prompting them became faster, cheaper, and more standardized. Companies now value system-level design involving context, memory, retrieval, and workflows over pure prompt crafting, reducing the exclusivity that once commanded six-figure salaries.
Is it making way for context engineering?
Context is the set of tokens that an LLM processes when generating a response. In English, a token corresponds to roughly three-quarters of a word. A context window is an AI model's short-term memory: the number of tokens, or chunks of text, it can process at once. It contains previous prompts, responses, and the current query. Once the limit is reached, older tokens are dropped, which can reduce accuracy.
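The dropping of older tokens can be pictured as a sliding window over the conversation. The sketch below is illustrative only: the word-based token estimate and the newest-first truncation policy are simplifying assumptions, not any vendor's actual tokenizer or eviction logic.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: in English one token is about three-quarters
    # of a word, i.e. ~4/3 tokens per word.
    return max(1, round(len(text.split()) * 4 / 3))

def fit_context(messages: list[str], window: int) -> list[str]:
    """Keep the most recent messages that fit in the token budget;
    older messages are dropped first, as in an LLM context window."""
    kept, used = [], 0
    for msg in reversed(messages):        # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > window:
            break                         # budget exhausted: drop the rest
        kept.append(msg)
        used += cost
    return list(reversed(kept))           # restore chronological order
```

With a budget of 8 tokens and three 3-word messages (about 4 tokens each), only the two most recent survive, mirroring how earlier turns silently fall out of a model's memory.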
Larger context windows allow models to process more information and reasoning at once, but context engineering makes the difference between generic results and precise, relevant answers in complex tasks by deciding what goes into that memory. So the focus shifts from wordsmithing to managing tokens and context. For example, a travel booking agent can automatically include preferences, loyalty numbers and budget limits in its reasoning without the user having to retype them each time.
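Deciding what goes into that memory can be as simple as merging stored user facts into every request. A minimal sketch of the travel-agent example above, where the profile fields and their names are hypothetical stand-ins for a real memory store:

```python
# Hypothetical stored profile; in practice this would come from a
# user database or an agent's memory layer.
PROFILE = {
    "seat_preference": "aisle",
    "loyalty_number": "FF-12345",
    "budget_limit_usd": 1200,
}

def build_context(user_query: str, profile: dict) -> str:
    """Prepend known traveler facts so the model reasons with them
    without the user retyping them each time."""
    facts = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    return f"Known traveler facts:\n{facts}\n\nRequest: {user_query}"

prompt = build_context("Book me a flight to Lisbon in May.", PROFILE)
```

The user only types the request; the loyalty number and budget limit ride along in the assembled context automatically.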
What happens to prompt engineers?
Prompt engineering isn't going away anytime soon. It will survive as a subset of the broader work of building AI systems. However, as AI agents evolve, prompts will no longer be created manually for each task, but will be dynamically generated through context pipelines. The field is already shifting to "agent engineering" or "AI system design." In these roles, prompts remain crucial building blocks, but are embedded in larger systems of retrieval, memory, workflows, and security layers.
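Dynamic prompt generation through a context pipeline might look like the sketch below, where retrieval, memory, and policy stages each contribute a labeled section. The stage functions and their outputs are toy stand-ins, not a real framework's API:

```python
from typing import Callable

# Toy stand-ins for real retrieval, memory, and policy components.
def retrieve(task: str) -> str:
    return "Relevant docs: refund policy v2, SLA terms"

def recall_memory(task: str) -> str:
    return "Prior decision: customer was refunded once in 2024"

def apply_policy(task: str) -> str:
    return "Policy: never promise refunds over $500 without review"

STAGES: list[tuple[str, Callable[[str], str]]] = [
    ("Retrieved", retrieve),
    ("Memory", recall_memory),
    ("Guardrails", apply_policy),
]

def generate_prompt(task: str) -> str:
    """Assemble the prompt from pipeline stages instead of writing
    it by hand for each task."""
    sections = [f"[{name}] {stage(task)}" for name, stage in STAGES]
    return "\n".join(sections + [f"[Task] {task}"])
```

Here no one hand-writes the final prompt; the pipeline composes it, and the hand-crafted part shrinks to the stage logic and the security layer.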
This shift will reduce the need for specialists who merely refine prompts. Just as HTML coders gave way to full-stack web developers, context engineering will absorb prompt engineering, leaving little room for prompt-only specialists as companies demand end-to-end systems expertise.
What are the dangers of automating such roles?
While automation improves efficiency, completely removing skilled engineers could weaken safety measures and creativity, both of which remain critical to ensuring reliable and ethical deployment of AI agents. Automating prompt and context design carries the risk of over-reliance on templates and frameworks without human oversight.
Poorly designed context pipelines can propagate bias, hallucinations (fabricated information presented with confidence), or unsafe outputs at scale. For example, if a retrieval system feeds outdated medical information into a healthcare chatbot, errors could be magnified across thousands of patients.
What happens now that context engineering is also being automated?
Context engineering is increasingly automated by AI agents that handle memory, retrieval, tools, and workflows. Instead of relying on people to manually shape context, agents now summarize transcripts, prioritize the most important facts, and ignore noise on their own; Microsoft's Copilot, for example, consolidates the most relevant meeting notes into a project briefing. As these systems evolve, they will unify text, images, audio, video, and structured data into richer context frameworks.
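Automated prioritization of facts can be approximated with a relevance score. The keyword-overlap heuristic below is a deliberately simple stand-in for what production systems do with embeddings and rankers:

```python
def score(fact: str, topic: str) -> int:
    """Count words shared between a fact and the topic of interest."""
    return len(set(fact.lower().split()) & set(topic.lower().split()))

def prioritize(facts: list[str], topic: str, k: int = 2) -> list[str]:
    """Keep the k facts most relevant to the topic, dropping noise."""
    return sorted(facts, key=lambda f: score(f, topic), reverse=True)[:k]
```

Given notes about a project launch, the off-topic lunch-menu item scores zero overlap and is dropped, which is the essence of an agent "ignoring noise" when it builds a briefing.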
This shift opens the door for humans to act as AI system architects who define objectives, devise policies, and ensure ethical behavior and governance, while leaving the cognitive micromanagement to the machines.