Why this exists, and why it is only a proposal generator.
Most AI writing tools try to do everything and end up sounding the same across every job. Proposal Ace makes the opposite bet. One form, one audience, one purpose.
There is a joke among freelancers that AI has made pitching both easier and worse. Easier, because you can generate a proposal in five seconds. Worse, because every single one of those generated proposals sounds identical to every other generated proposal on the platform. Clients have learned the signs. They can spot an AI-written opener from across the room, and once they do, your application goes to the bottom of the pile. The tool that was supposed to save you time ends up costing you interviews.
Proposal Ace started as a reaction to that pattern. The hypothesis was simple. A general AI writing tool has to be safe, pleasant, and broadly applicable, which pushes every output toward the mean. A narrow AI tool that does exactly one thing can be tuned aggressively for that one thing, without worrying about breaking anything else. And the thing freelancers needed most was a way to get specific, human-sounding proposals at the speed they were used to getting bland, forgettable ones.
The problem we kept seeing
Every freelancer who has spent time on Upwork, Fiverr, Contra, or any of the newer platforms has lived the same experience. You find a job you want. You open a blank box. You stare at it. You type "Hi," then delete it. You type "Hello," then delete that. Twenty minutes later you have written three paragraphs, none of which feel right, and you send it anyway because the clock is running. The proposal gets no reply. You tell yourself the client was overwhelmed with applications. Next job, same thing.
The real issue is not that you cannot write. The real issue is that writing a good proposal from scratch is a surprisingly hard creative exercise, and nobody has infinite creativity to spend on the eighth application of the day. The template solution is fast but tanks your reply rate. The from-scratch solution is effective but slow, which means you send fewer applications, which means you win fewer gigs even when your individual reply rate is high. The whole industry is stuck on that tradeoff.
A general-purpose AI tool does not fix this, because it does not know what a proposal is supposed to do. It knows how to write pleasant English. It does not know that the first sentence has to be unforgeable. It does not know that clients scan for specifics and skip over adjectives. It does not know that ending with a question beats ending with a statement. It writes something that is grammatically fine and conversionally dead.
The bet behind the product
The bet is that a narrow tool with a heavily tuned system prompt can outperform a general tool by a wide margin on a single task. Not because the underlying model is better. It is the same model, or sometimes a slightly different version of the same family. The difference is the instruction set. Proposal Ace tells the model exactly what a good proposal looks like, what phrases to avoid, what structure to follow, what tone to match, and what tradeoffs to make when the inputs are thin.
The system prompt has been through dozens of revisions. Earlier versions produced proposals that were too formal. The next batch was too casual. Then too long, then too short. The current version sits in a sweet spot that produces drafts most freelancers can send after a single light edit. That is the goal. Not perfect output, which is impossible without knowing the freelancer personally, but draft-quality output that needs only a human touch on top.
The tool deliberately does not do other things. It does not write emails. It does not draft LinkedIn messages. It does not help you with your portfolio. It does not save your drafts or track your pipeline. Every feature we chose not to build is a feature that would have pulled the product away from the thing it is actually good at. A narrow tool stays sharp because it has nowhere to hide.
How we think about quality
Quality in a proposal is measurable in exactly one way: the reply rate. You send a proposal, and the client either replies or does not. Over enough samples you get a signal. Our goal from day one was to push the average user's reply rate up by at least a factor of two compared to their template-based baseline. The internal benchmarks are imperfect because we do not have access to platform data, but freelancers who have tested the tool on a matched sample of jobs consistently report a meaningful lift. That is the only metric that matters. Time saved is a bonus, but if the proposals did not convert, the time saved would be worthless.
We also think about quality in terms of how little editing the output needs. A tool that produces a draft requiring fifteen minutes of rewriting has only partially solved the problem. A tool that produces a draft requiring one minute of editing has actually solved it. Every revision of the prompt has been optimized to reduce that final editing time. When we see the same edit recurring in user testing, that edit becomes part of the default output.
Who we built this for
The user we keep in mind is a mid-career freelancer. Two to eight years in. Actually competent at the work, not a beginner. Sending somewhere between five and thirty proposals a week. Reasonably picky about fit but frustrated that their reply rate does not reflect their skill. Usually specialized but not famous. Makes a living, wants to make a better living.
This is a specific profile, and it rules out some other users. We did not build this for the freelancer who sends two hundred applications a day hoping something sticks. For that user, the highest-leverage tool is not a better proposal, it is a better targeting filter. We did not build this for absolute beginners either. A beginner needs to write their own proposals for a while, badly at first, because the struggle teaches them what clients actually respond to. Handing a beginner an AI tool skips that learning and leaves them dependent.
The mid-career freelancer gets the most value because they already know what they are doing. They have tested what works. They have a voice. The tool amplifies that voice. They get to spend their creative energy on client work instead of on the hundredth proposal opener.
What we do not do with your data
The pitches you generate pass through the Anthropic API and are not stored on our servers. We do not keep a database of your job posts, your profiles, or your generated proposals. We do not train a model on your inputs. There are no accounts because there is no data to tie to an account. Privacy is a side effect of the product being narrow. If we wanted to offer saved drafts or pipeline tracking, we would have to store things, and we would have to make hard decisions about how to handle that data. We chose not to take on those decisions.
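The stateless flow can be sketched in a few lines. This is an illustrative sketch, not Proposal Ace's actual code: `build_request` is a hypothetical helper and the model id is a placeholder, but the payload shape matches the Anthropic Messages API, and the point is that nothing identifying is attached and nothing is persisted.

```python
# Illustrative sketch only -- not the product's real implementation.
# The payload is built fresh for each call and discarded afterwards;
# nothing here writes to disk, a database, or a log.

def build_request(job_post: str, tone: str) -> dict:
    """Assemble a single Anthropic Messages API payload.

    Note what is absent: no user ID, no session token, no account
    reference. The job post exists only for the lifetime of this
    one request. The model id is a placeholder.
    """
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "system": f"Write a freelance proposal in a {tone.lower()} tone.",
        "messages": [{"role": "user", "content": job_post}],
    }
```

Because the payload is the entire state, there is nothing left over once the response is returned to the browser; "deleting your data" is a no-op, since it was never kept.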
This also means the tool cannot learn from your past proposals or get better as you use it more. Some users ask for that feature and it is a reasonable ask. The honest tradeoff is that a tool with memory is a tool with stored data, and stored data comes with its own costs. If we ever add memory, it will be opt-in, local to your browser, and fully under your control. For now, you get a stateless tool, which is a cleaner, faster experience for the vast majority of users.
The road from here
The next things we are thinking about are narrow extensions of the current thing, not new products. A slightly smarter input that recognizes when a job post is thin and asks for specific clarifications. A platform-aware mode that shortens for Fiverr and lengthens for enterprise. A rewrite pass that takes an existing proposal you wrote and sharpens it instead of starting from scratch. Each of these has to pay for itself in reply-rate improvement, or we do not ship it. We have a list of features we have explicitly rejected for the same reason. A product stays good by saying no more often than yes.
In the meantime, the best thing you can do with the tool today is use it. Try it on ten jobs you were going to skip. Watch what the hook does when the job description is rich, and watch what it does when the description is thin. Learn when to generate and when to write from scratch. Over time, you will develop a sense of when this is the right tool and when your own voice, unassisted, is the better call. Both are good outcomes. The worst outcome is a freelancer who defaults to templates because writing from scratch feels hard. That is the outcome we are trying to make impossible.
One last note
Proposal Ace is free because the infrastructure cost of the API calls is low enough that we can absorb it for a long time. We are not running this as a growth play or a funnel to something else. If we ever need to charge, we will say so plainly and keep a generous free tier for solo freelancers. If we ever go away, we will let you know in advance. Meanwhile, the tool works, it is fast, and it is yours to use. Paste a job post and see what comes out.
What we learned from the first wave of users
The first hundred freelancers who used Proposal Ace taught us things we could not have predicted from inside the building. The first surprise was that the most popular tone was not the one we expected. We assumed Confident would dominate because the marketing of the tool leans in that direction. In practice, Warm won by a wide margin. Freelancers know their clients. Most of the work on platforms goes to warmth, not bravado. The people who are in the trenches every day were more right about this than our product intuition.
The second surprise was that users wanted shorter output more often than longer. Our original default length target was two hundred and fifty words. We pulled it down to one hundred and fifty after enough users told us the tool was producing pitches longer than their clients would bother to read. The honest lesson was that freelancers know the platforms better than a product team reading documentation. If a tool is built for a specific audience, that audience needs to be close to the design loop.
The third surprise was that the Copy button is a load-bearing feature. We almost shipped without it, on the theory that copying text from a browser is already easy. Users told us the button mattered because it made the product feel finished, like a professional tool rather than a demo. Small details of that kind do a disproportionate amount of the work in getting a product taken seriously. A proposal generator that looks like a hackathon project will not be used by a freelancer charging premium rates, because that freelancer's taste in software has to match their taste in the work they produce.
The philosophy of boring choices
A running joke on the team is that we keep making boring choices on purpose. We did not build a desktop app, we did not build a Chrome extension, we did not build an iOS companion, we did not add a referral program. Each of those would have been a fun project. Each of those would have split attention away from the one thing that matters. Boring choices compound. Every week the tool is not distracted by a side feature is a week we can spend making the core generation sharper, the page content better, or the product faster.
The most exciting product decisions are almost always the quietest ones. Rewriting a line in the system prompt that was pushing the model toward corporate language. Tweaking the tone copy so users pick the right one more often. Cutting half of a landing page because the short version converts better than the long one did. These kinds of changes do not make for good blog posts, but they move the thing that matters, which is how often the proposal a freelancer sends actually earns a reply.
If there is a single design principle behind the whole product, it is that narrowness is a strategy, not a limitation. A tool that does one thing well, in a world of tools that do many things poorly, earns a specific kind of trust that gets talked about. That word of mouth is most of our distribution, because we do not do paid acquisition. We exist in freelancer Discords, Slack communities, Twitter threads, and LinkedIn replies because the tool is small enough that someone can share it in one sentence and the person they share it with can try it in ten seconds. That works. Anything larger would not.
The generator on the homepage turns any job post into a proposal that actually reads like a human wrote it. Free, no signup, takes about six seconds.
Write my proposal →