Four Kitchens

How content strategy and prompt engineering unlock the powers of AI

For organizations that traditionally find themselves short-staffed or otherwise lacking sufficient resources, generative AI seems like a promising, all-in-one solution. The technology already offers ways to streamline backend editing, accelerate content creation, and revolutionize chatbot interfaces. But where should you start, especially considering the landscape evolves so rapidly it remains a moving target?

As you consider next steps, questions about how AI functions should remain an ongoing conversation within your organization. Along with establishing specific guidelines for AI use, you also need to address the technology’s strategic considerations. That way, you’ll be able to implement it more effectively.

Helping your teams do more with less is often an occupational necessity, especially for higher ed institutions and nonprofits. By incorporating your organization’s content strategy, you’ll unlock the potential of generative AI to empower your teams while preserving user trust.

Why generative AI is here to stay

The increased interest in generative AI stems from the rapid growth of deep learning algorithms called large language models (LLMs). Google, Microsoft, and Amazon are just a few of the companies that have rolled out LLM platforms.

These models are trained on massive amounts of information, such as the contents of a database — or the entire internet. So far, OpenAI’s GPT models, which power ChatGPT, have proven to be the highest-performing LLMs for content creation. But each version of the model offers different strengths.

For example, GPT-3 and GPT-3.5 at times struggled with tasks such as hitting a target word count, but GPT-4 delivers more consistently accurate results. The right model varies depending on what you need from the technology. Companies such as Microsoft are already building AI functionality into their tools.

Generative AI is now reshaping expectations for online experiences. Data privacy, copyright, and mitigating bias are just a few ongoing issues as LLMs mine public websites for source material. Fundamentally, AI requires oversight to ensure its results are accurate, useful, and appropriate for your nonprofit or higher ed organization.

Image generated by Ideogram

This image was generated using this prompt: A visually appealing and conceptual illustration that showcases the importance of prompt engineering and a well-documented content strategy for higher education institutions and nonprofit organizations. The scene is set in a modern workspace, where a diverse team collaborates on integrating digital strategies with traditional knowledge. 

Prompt engineering enhances generative AI

Generative AI systems operate in response to a series of text-based prompts, which sounds deceptively simple. However, if you’ve experimented with AI tools, you’ve likely been unimpressed with the results after asking for something complex, like a blog post. The issue lies less with the technology producing a poor answer than with the importance of asking the right questions.

AI is hampered by what one expert called “the articulation barrier,” which means you have to structure your prompts to give the technology adequate direction. Through prompt engineering, you can articulate the context for every task you would like the LLM to complete.

Tips for effective prompt engineering

Just as you would give a freelancer a content brief or RFP outlining project expectations, LLMs require the same direction. You have to consider and articulate every detail for generative AI to respond effectively. In fact, the more details you include, the less likely you are to produce an AI “hallucination” — a nice way of saying the model makes things up.

Approaches to prompt engineering can vary, but you can improve your results by incorporating the following guidelines into LLM requests:

Always provide clear instructions and appropriate context

Context can include goals, desired outcomes, relevant keywords, or expertise level. If you want generative AI to write as an expert on a subject, you have to provide it with that instruction.

Specify the format and length of each output

Whether you want an article, email, or blog post, you have to include the type of content you want produced and a target word count.

Incorporate examples of messaging and tone where possible

You can refer LLMs to your organization’s content to inform its results. However, you should consider any copyright issues or proprietary information before sharing content with AI. Companies are not always transparent about how submitted materials train their models.

Ask the LLM to adopt a specific persona

You can improve the voice and expertise level of what an LLM produces by asking it to write as an expert in a given field.

Be prepared to iterate

Any output generative AI produces requires editing and enhancement. If the results don’t meet your expectations, try reframing the question or adding more context or other specifics.
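Taken together, these tips can be captured in a small helper that assembles a structured prompt from its parts. This is an illustrative sketch — the function and parameter names are hypothetical, not part of any particular LLM tool or SDK:

```python
def build_prompt(task, persona=None, context=None, output_format=None,
                 word_count=None, examples=None):
    """Assemble a structured LLM prompt from the guidelines above.

    All parameter names are illustrative; adapt them to your own workflow.
    """
    parts = []
    if persona:
        # Ask the LLM to adopt a specific persona
        parts.append(f"Act as {persona}.")
    if context:
        # Goals, desired outcomes, relevant keywords, expertise level
        parts.append(f"Context: {context}")
    # Always provide clear instructions
    parts.append(f"Task: {task}")
    if output_format:
        # Specify the format of the output (article, email, blog post)
        parts.append(f"Format: {output_format}")
    if word_count:
        # Specify a target length
        parts.append(f"Target length: about {word_count} words.")
    if examples:
        # Incorporate examples of messaging and tone
        parts.append("Match the tone of these examples:")
        parts.extend(f"- {ex}" for ex in examples)
    return "\n".join(parts)

prompt = build_prompt(
    task="Write a donation appeal for our annual fund drive",
    persona="a fundraising copywriter for a university",
    output_format="email",
    word_count=200,
    examples=["Your gift keeps first-generation students on campus."],
)
```

If the first result misses the mark, iteration is as simple as adjusting one argument — adding more context, changing the persona — and regenerating.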

How a documented content strategy guides AI implementation

As you consider incorporating AI tools into your organization’s content processes, you should revisit your content strategy. Brand guidelines for voice and tone, along with documented content objectives, audience needs, and business goals, are all critical for building a foundation for prompt engineering.

Just as importantly, you should form an internal community to guide how your organization incorporates AI into its operations. Along with making sure your internal teams have proper training to start using and overseeing AI, you can also determine what support materials they may need. Documentation such as prompt libraries and templates to guide common tasks will ensure everyone is working with the technology from the same baseline.
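A prompt library can start as something very simple: a shared set of named templates that editors fill in with task-specific details, so every team works from the same baseline. A minimal sketch using Python’s standard library (the template names and wording are hypothetical):

```python
from string import Template

# Hypothetical shared prompt library; names and wording are illustrative.
PROMPT_LIBRARY = {
    "event_recap": Template(
        "Act as a communications writer for $org. Write a $word_count-word "
        "recap of $event for our newsletter, in a warm, informal tone."
    ),
    "program_summary": Template(
        "Act as a subject-matter expert. Summarize the $program program "
        "for prospective students in about $word_count words."
    ),
}

def render_prompt(name, **fields):
    """Fill a library template; raises KeyError if a field is missing."""
    return PROMPT_LIBRARY[name].substitute(**fields)

prompt = render_prompt(
    "event_recap",
    org="Springfield College",
    word_count=150,
    event="the alumni gala",
)
```

Because `substitute` raises an error when a field is missing, editors get an immediate reminder to supply every detail the template expects — the same discipline good prompt engineering requires.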

You and your teams should also consider the greatest pain points where AI will provide the most value. For example, if generating alt text to meet accessibility standards is challenging for your content editors, AI can automatically generate image descriptions. Of course, AI-generated text requires oversight, but this is just one way it can streamline how your teams work.
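Before routing images to an AI captioning step, a short script could flag which images on a page actually lack alt text, so editors review only what needs attention. A minimal sketch using Python’s standard library (the captioning call itself is omitted, since it depends on your chosen tool):

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    """Collects the src of every <img> with a missing or empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # alt absent or empty string
                self.missing.append(attr_map.get("src", "(no src)"))

html = """
<img src="campus.jpg" alt="Students on the quad">
<img src="library.jpg">
<img src="lab.jpg" alt="">
"""
finder = MissingAltFinder()
finder.feed(html)
# finder.missing now lists images to send for AI-drafted alt text
# and human review: ["library.jpg", "lab.jpg"]
```

The AI drafts the descriptions; a person approves them before publication — automation with oversight, as described above.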

Care and consideration are keys to implementing generative AI

Incorporating LLMs into how your higher ed or nonprofit organization works requires identifying where the technology adds value without letting its limitations introduce harm. Much like the human behavior AI aims to imitate, the technology simply isn’t perfect.

You need to make sure your teams are educated about biases within LLMs and AI hallucinations to protect your organization’s reputation. Without the right prompt engineering and editing, your content will quickly become generic, uninformative, or misleading. You still need oversight to ensure your organization and what it publishes stands out for the right reasons.

Many organizations are racing to get on board with AI. While the potential for optimizing how you produce content is enticing, you need the right guardrails to make sure the technology really works for you.