Professional AI whisperers have launched a marketplace for DALL-E prompts
In the past few years, art made by programs like Midjourney and OpenAI’s DALL-E has gotten surprisingly compelling. These programs can translate a text prompt into literally (and controversially) award-winning art. As the tools get more sophisticated, those prompts have become a craft in their own right. And as with any other craft, some creators have started putting them up for sale.
PromptBase is at the center of the new trade in prompts that coax specific imagery out of AI image generators, a kind of meta-art market. Launched earlier this summer to a mix of intrigue and criticism, the platform lets “prompt engineers” sell text descriptions that reliably produce a certain art style or subject on a specific AI platform. When you buy a prompt, you get a string of words to paste into Midjourney, DALL-E, or another system you have access to. The result (if it’s a good prompt) is a variation on a visual theme like nail art designs, anime pinups, or “futuristic succulents.”
The prompts are more complex than a few words of description. They include keywords describing the intended aesthetic, the important elements of the scene, and brackets where buyers can add their own variables to tailor the content. Something like the nail art design might include the positions of the hands, the angle of the pseudo-photographic shot, and instructions for tweaking the prompt to produce different manicure styles and themes. PromptBase takes a 20 percent commission, and prompt writers retain ownership of their work, although the copyright status of AI art and prompts remains largely untested.
Paying $2 to $5 for a paragraph of text might seem like a strange purchase, and the idea of paid prompts doesn’t sit right with everyone using these systems. But after buying the nail art design mentioned above, I was curious about what it took to make a good commercial AI prompt — and how much money was actually in it. PromptBase put me in touch with the designer, Justin Reckling, to talk about it.
The following has been condensed and lightly edited for clarity.
How and when did you get into prompt engineering? Did you have particular skills that made you good at it?
I got into prompt engineering in April of 2022, when I was able to get my hands on OpenAI’s GPT-3 text generation tool. I quickly found that I had a knack for it and was able to create some great text-to-image prompts with it. My related skills include programming and software quality assurance. Plus, I have a good eye for aesthetics, which helps me create prompts that are visually appealing.
Do you come at prompt writing primarily from the perspective of being an artist, being a coder or engineer, or something else?
I see prompt writing from the perspective of an artist, coder and engineer. I use my programming experience to help me understand how the service may interpret my prompt, which guides me to more effective tinkering with it to coax the results I’m after. Every word in a prompt has a weight associated with it, so trying to work out what works best and where becomes a core asset in the skillset. My background in software quality assurance is a pretty big driver in that “what happens if” style of thinking. Being overly verbose growing up has been a sort of blessing in disguise as well. It feels very liberating to have that as an asset now.
How many prompts do you sell in a typical day / week? Do you have a sense of what people buy them for?
I typically sell between three and five prompts per day, with each prompt averaging two to three sales within a month or two. I currently have an inventory of 50 prompts, with new ones being added regularly. The majority of prompts that have sold seem to be for pleasure rather than business purposes.
How do you decide what you’re going to make and sell? Is it based more on your personal interests or demand in the community?
It’s a mix of both personal interests and demand from the community. I want to make things that people will find helpful and inspiring, and it’s great when those two things overlap. I also have to keep an eye on what’s selling well so I can understand the needs of the community and continue to provide what they’re looking for. I use the “most popular prompts” carousel list on the main page. We’ll be getting our hands on some seller-specific metrics soon.
What’s your most popular prompt?
Block Cities has the most sales. The prompt with the highest ratio of views to purchases would have to be my T-Shirt Product Shots.
How do you start constructing a prompt?
After I have a rough idea of what I want to accomplish, I try to narrow things down to people, places and things – the core actors or main drivers in the scene I’m trying to construct. I use the service to generate a few rough prompts to get a feel for what the scene might look like. I find it much easier to take something that works well and then add on to it rather than having to go back and remove things until it looks better. You start with the big important strokes and then work in the finer details.
How much research do you do into whatever you’re trying to generate? If you’re making nail art, for example, do you have to learn things like nail terminology and preferred hand poses, or are you going by intuition?
I do a fair amount of research for each text-to-image prompt I create. I start by asking GPT-3 subject matter questions to help me get a better understanding of the scene I’m trying to create. For example, if I’m creating a prompt about someone getting a manicure, I might ask, “Someone is getting a manicure done; explain what you’re seeing.” This allows me to get more specific details from an expert rather than having to rely on articles or other sources of information that might not be as accurate.
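For a sense of what that research step can look like in practice, here is a minimal sketch using OpenAI’s Python client. The model name, API key placeholder, and exact phrasing are illustrative assumptions, not necessarily Reckling’s actual workflow.

```python
# A hedged sketch of using GPT-3 to gather scene details before writing
# an image prompt. Model name and wording are illustrative.
import openai

openai.api_key = "sk-..."  # placeholder API key

research = openai.Completion.create(
    model="text-davinci-002",
    prompt=(
        "Someone is getting a manicure done; "
        "explain what you're seeing in concrete visual detail."
    ),
    max_tokens=150,
)

# The completion text becomes raw material for the image prompt:
# hand positions, tools on the table, lighting, and so on.
print(research["choices"][0]["text"].strip())
```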
Are there particular skills or tricks you’ve learned as you’ve worked that make prompting easier?
When creating text-to-image prompts, it can be helpful to use quotations to separate main ideas. In addition, it can be helpful to become familiar with terms like “hyper-realistic,” “macro photography,” “octane render,” “hyper-detailed,” “cinematic lighting,” “long shot,” “middle shot,” etc. This will give you a better understanding of how to add depth and detail to your prompts and also help you control the distance and focus. For example, you could add the phrases “cinematic lighting” and “golden hour” to the end of the prompt above to create a more refined and specific image.
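To illustrate how those modifiers might be stacked onto a base idea, here is a small sketch that composes a prompt and sends it to OpenAI’s image endpoint via the Python client. The subject, modifier list, and use of the API (rather than the DALL-E web interface the interview describes) are assumptions for the sake of the example.

```python
# A hedged sketch of composing a DALL-E prompt from a quoted main idea
# plus style, lighting, and framing keywords. Details are illustrative.
import openai

openai.api_key = "sk-..."  # placeholder API key

base = 'a close-up of "manicured hands resting on a marble table"'
modifiers = [
    "macro photography",
    "hyper-detailed",
    "cinematic lighting",
    "golden hour",
    "middle shot",
]
prompt = f"{base}, {', '.join(modifiers)}"

# Request a single 1024x1024 image for the composed prompt.
result = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
print(result["data"][0]["url"])
```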
Your visual work seems mostly DALL-E based, but how different is the prompt construction process for other systems like Midjourney?
It really depends on what you’re looking for and what you need the prompt to do. If you want something that’s more polished and professional, like a stock image substitution, then DALL-E is probably your best bet. However, if you’re looking for something more creative and hands-on, then Midjourney might be a better option. With Midjourney, you can adjust the weights of words, decide what resolution you want, and do other customizations. But keep in mind that it takes more time and effort to get the results you want.
What does adjusting the weight of words do?
Increasing the weight increases the strength of the “flavor” of that word, so there’s a greater chance that it will manifest in a more noticeable way. Conversely, you can reduce weights as needed. You do this by adding two colons and a number. Each word starts with a weight of 1, so “hot:: dog::1.5” gives “dog” one and a half times its default weight, while 0.5 would cut it in half.
So reducing the weight of “dog” would make it more likely you’d get the food instead of an actual dog?
That’s correct, and increasing it may give you a very attractive dog or one that might be looking for a drink of water.
On a side note, I do enjoy Midjourney quite a bit. I would imagine more of my prompts would be Midjourney based, but up until recently, only DALL-E prompts were accepted by PromptBase, so that’s where I spent most of my effort.
It’s also worth noting that there is a text-to-image generator called Stable Diffusion that you can run locally on your computer. However, you need a fairly powerful video card to run the model, so it’s not as widely accessible as it could be. I believe that, in the long run, locally run models that are free from restrictions will eventually surpass the big players in the market. I’ve been experimenting with this quite a bit lately.
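For readers curious what running Stable Diffusion locally involves, here is a minimal sketch using Hugging Face’s diffusers library. The model ID, prompt, and precision settings are illustrative, and a GPU with several gigabytes of VRAM is assumed.

```python
# A hedged sketch of running Stable Diffusion locally with diffusers.
# Model ID and settings are illustrative; a capable GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a reasonably powerful video card

image = pipe(
    "macro photography of futuristic succulents, cinematic lighting, golden hour"
).images[0]
image.save("succulents.png")
```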
The ability to tinker with your prompts without having to spend a lot of money is a big draw for me. Right now, I have to spend $10 to $15 in credits for each prompt I create to get the results I want.
Comparing that with the earlier numbers, it sounds like you’re spending more on each prompt than you’re making in sales.
Yes, I need to sell around 5 to 10 of a given prompt to break even. Some of them don’t take long to generate, and as I get better at finding text to reuse between prompts, I’ll need fewer variations to reach my end goal. Investing in this technology is worthwhile in the long run, as interest continues to grow in the use cases for it. I’m also learning skills that I can apply to other models, so I don’t feel that it’s much of a drawback at the moment.
This also sheds some light on the value of prompts. There are many people out there who criticize what I’m doing, but most of the time, they just see the end result and none of the effort behind getting to that final destination. It’s a hindsight thing to them. Of course, anyone can type those words, but can you figure out how to get manicured hands in a consistent pose on the first prompt? The consistency of the prompts’ exceptional results is a great source of value as well.
Even if the monetary cost of this discovery plummets, a certain degree of time and effort went into the final words in that prompt, which will always hold value.
How do you think about ownership of your work? Do you have a sense of whether your prompts are protected by copyright, and how much do you care about that?
I don’t think about ownership of my work too much — I just try to create something that I’m proud of and that others will enjoy. As for copyright protections, I’m not too worried about it since I’m paid for revealing my work. I think our society should provide social safety nets, like universal basic income, to help those in the creative field who might be struggling financially. This will become increasingly important as automation continues to affect different professions.
I saw you did some GPT-3 text prompts, too. Can you write an AI text prompt that would automatically generate AI art prompts?
I have a trained model over at OpenAI that I’ve just been given permission to share; it’s available at typestitch.com. It’s been trained on quite a bit of data from real-world prompts, so it can take a keyword or two and generate sample prompts for you to try for fun or just to give you some concept ideas to fiddle with.
I use the model every day to help me get the creative juices flowing or, at the end of the day, to come up with some random craziness to share with friends. It’s never been to the point where I’ve wound up selling a prompt that’s been generated as-is, though. The needs of the audience are still far too nuanced to reliably generate a favorable prompt right out of the gate. But with enough examples, a model can give you a lot of new and strange ideas to enjoy playing around with.
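As a rough illustration of the idea, a fine-tuned GPT-3 model could be queried like this through OpenAI’s Python client. The fine-tune ID below is a placeholder, not the model Reckling hosts at typestitch.com, and the formatting of the prompt is an assumption.

```python
# A hypothetical sketch of asking a fine-tuned GPT-3 model to draft an
# image prompt from a keyword or two. The model ID is a placeholder.
import openai

openai.api_key = "sk-..."  # placeholder API key

idea = openai.Completion.create(
    model="davinci:ft-personal-2022-08-01",  # placeholder fine-tune ID
    prompt="Keywords: futuristic succulents\nImage prompt:",
    max_tokens=60,
    temperature=0.9,  # higher temperature for more varied ideas
)

print(idea["choices"][0]["text"].strip())
```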