When I’m doing freelance software development projects and I’m asked how much time things will take, it’s always very hard to answer. It’s a very high-dimensional problem, with many degrees of freedom: how much should you charge? Should you bill per hour or not? If you use AI, how should it be factored into the effort and time you report? What is the separation between the work you charge for and the work you don’t? Does asking your client questions on Slack over the weekend, to better understand the problem, count as time or not? Facing these doubts, and assuming they forgo the route of charging by the hour (which can be logistically quite painful, and can lead to its own kind of bad outcomes), I guess that most people are quite happy to fudge a rough number for a flat rate, do a couple of rounds of negotiation, reach an agreement, and call it a day. They implicitly accept that the outcome of this process can go both ways: either you estimated too low, and you lose something, or the contrary. And you hope that in the grand scheme of things it all averages out in a reasonable way: your loss, their loss, who cares in the end? But does it really work this way?

There is also a certain aspect of “symmetry” in the incentive structure of the problem, which in my case ranks very high on my list of preoccupations. In some situations in life, your goal is simply to maximize your own advantage, and the structure of the situation is such that you can be “greedy” about it. Say you are negotiating your future salary with a really big, soulless mega-corporation, for instance. There is no real incentive for you not to ask for the maximum you can get. If you do get it, you will not really “hurt” anyone, except maybe your coworker, who was hired to do the same thing as you but got much less. If, on the contrary, you are interviewing at a very small company and you ask for a very big salary and get it, maybe it will mean that the company owners will be in trouble in the future, because of your excessive salary demand (which they accepted at the time, because you were able to convince them that you are a superstar). Very often in life, there is a sense in which you want to “distribute honesty” on both sides of the equation, not just yours. In simpler terms, very often in life, you want to be nice and honest, simply because it’s the right thing to do. I’m not talking here about some kind of “second-order” strategy, being nice in the hope that, in the end, “nice people win”. I’m just talking, plainly, about feeling good about yourself. And being maximally honest, to the point of being candid, plays a role in that, I believe.

Those are the kinds of considerations I ruminate on when trying to come up with a reasonable amount for a project (some people would say that I overthink quite a lot, and they probably wouldn’t be so wrong), and recently I got an idea. I’m going to explain everything about the project to an AI: what needs to be done, how the current code works, etc. And then I’m going to ask the AI to give me an estimate of how much I should ask for. So far so good. Boring, even, right?

Here’s the idea: I will share not only what the AI suggested (the detailed estimation) but also my prompt itself, so that the client can evaluate and validate, on their own, that what I asked the AI is honest and accurate. The client can understand and evaluate what I was thinking, literally.

I would like to call this idea a Proof of Prompt (POP): a way to distribute and maximize honesty across the two sides of a transactional interaction (within yourself, and toward the other actor). You are forced to be honest and transparent, because there is nowhere to hide.
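
To make this a bit more concrete, here is a minimal sketch of what such a shareable artifact could look like. The format, the field names, and the `build_proof_of_prompt` helper are purely illustrative assumptions on my part, not an existing tool; the only point is that the client receives the full prompt and the AI’s answer as one package, not just the final number.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_proof_of_prompt(prompt: str, model: str, ai_answer: str) -> dict:
    """Bundle the exact prompt and the AI's detailed estimate into one artifact.

    The client gets this whole object, so they can check that the prompt
    honestly describes the project before trusting the number it produced.
    """
    artifact = {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "model": model,          # which AI produced the estimate
        "prompt": prompt,        # the full, unedited prompt
        "ai_answer": ai_answer,  # the detailed estimate, as returned
    }
    # A content hash lets both sides refer to the same version of the artifact later.
    artifact["sha256"] = hashlib.sha256(
        json.dumps(artifact, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return artifact


if __name__ == "__main__":
    pop = build_proof_of_prompt(
        prompt="Here is the current codebase layout, the feature requested, ...",
        model="some-llm",
        ai_answer="Estimated effort: 5 to 7 days, broken down as follows: ...",
    )
    print(json.dumps(pop, indent=2))
```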

Of course this choice of name is meant to evoke the idea of Proof of Work in cryptography, with which it shares some structural similarity: the notion of implementing trust as an algorithm, transferring its moral weight onto a mechanical, seemingly more objective substrate.

Here’s another case where a variation on this idea could work. I am a university professor, and there is a moral panic across the educational field about the use of AI for doing student work. How can you know whether what the student did is honest? How can you be fair? The solution with a Proof of Prompt is straightforward: you first acknowledge (and even recommend) the use of AI, because in many contexts, let’s face it, it’s the progressive way to go (this is debatable in itself, of course). Then what you ask for is not the result (the work) itself, but the conversation (the list of prompts and AI answers) that led to the creation of the work. In other words, you ask for the Socratic dialogue that the student had with the AI, and that is the “artifact” that you ultimately judge (or grade, if you must).
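
As a rough illustration (again, the submission format and the `looks_like_a_dialogue` check below are hypothetical assumptions, not an existing tool), the graded artifact could simply be the ordered list of turns; even a trivial check already distinguishes a genuine exchange from a single pasted prompt-and-answer:

```python
from typing import Dict, List

# A hypothetical submission format: the graded artifact is the dialogue itself,
# an ordered list of turns, not the final essay or program.
Transcript = List[Dict[str, str]]  # each turn: {"role": "student" | "ai", "content": "..."}


def looks_like_a_dialogue(transcript: Transcript, min_student_turns: int = 3) -> bool:
    """Very rough sanity check: a real Socratic exchange has several student
    turns, not one giant prompt followed by one giant answer."""
    student_turns = [t for t in transcript if t["role"] == "student"]
    return len(student_turns) >= min_student_turns


example = [
    {"role": "student", "content": "Here is my first attempt at the proof; where does it break?"},
    {"role": "ai", "content": "The induction step assumes what you are trying to show..."},
    {"role": "student", "content": "So I should strengthen the hypothesis? Like this?"},
    {"role": "ai", "content": "Yes, and now the base case needs to change accordingly."},
    {"role": "student", "content": "Rewritten below; I think the bound is now tight."},
]

print(looks_like_a_dialogue(example))  # True
```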

The Proof of Prompt is a way to “bake” the complexities of a process into an outcome, which is itself produced by an external entity (an AI) that, by definition, you can posit to be maximally honest and fair (although yes, that aspect is highly debatable, of course).