In our most recent blog post, Four Kitchens Director of Technology Mike Minecki told you a bit about how Machine Learning — as Mike put it — is “the scientific study of algorithms and statistical models that computer systems use to perform a specific task effectively without using explicit instructions, instead relying on patterns and inference.” Mike discussed some of the technical details of how Machine Learning (aka ML) can help content creators with time-consuming tasks like building out metadata and content tagging, creating summaries, and even moderating comments.
Today, I’d like to focus on the other part of that equation: the human experience that ML algorithms enable.
It’s a people thing
At Four Kitchens, we like to say that we build BIG websites. Well, a big part of those big sites is the effect they have on both users and the people in our partners’ organizations. That’s why we make the user experience (UX) central to everything we do. It’s why we help our partners consider communications strategies to engage users for both external and internal projects. And it’s why human emotions played a big role in this ML experiment.
All content, no matter how technical, tells a story. It’s how we humans experience the world. Understandably, images are a vital part of that storytelling. The right graphics can boost reader engagement and evoke the kind of emotion that makes a story, event, product, or brand memorable. But finding the right photos, especially for organizations that maintain massive image collections, can eat up a huge amount of time and bog down even the most efficient workflows.
When we began experimenting with how ML and artificial intelligence (AI) could benefit our partners, faster and more effective content creation seemed a natural place to start. But we also wanted to see whether we could use ML to generate content that helped people connect with one another.
Come on, get happy
At DrupalCon Seattle 2019 this past April, we introduced the results of our experiment: HappyGram.ai. This humble little Drupal site demonstrates how ML can streamline content-creation workflows by helping you sort through your image collection to find the right graphics for a given story. For organizations with massive banks of photos and other graphics, this capability alone can make ML worthwhile.
But HappyGram does more than sort photos. It also demonstrates how ML can create a genuine connection between people.
HappyGram.ai uses the Google Natural Language API, a Google AutoML model, a free stock-photography database, and Drupal to show how easy it can be to combine storytelling with visual elements that automatically fit the content and mood of any piece. HappyGram analyzes text, searches a specified graphic collection for appropriate images, and applies filters to reflect sentiment and enhance mood.
The result is a set of highly shareable entities, which we call "HappyGrams," constructed from real-life human experience and emotion. The experiment is fairly rudimentary and somewhat limited by our use of all free tools, but it shows the potential of ML where content creation is concerned.
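To make the pipeline concrete, here is a minimal sketch of the three steps described above: score a story's sentiment, pick an image whose tag matches that mood, and choose a filter to match. The real site uses the Google Natural Language API and an AutoML model for the scoring step; since those require credentials, this sketch substitutes a toy word-list scorer, and every name, tag, and filter here is purely illustrative.

```python
# Hypothetical sketch of a HappyGram-style pipeline: score a story's
# sentiment, pick an image tagged with a matching mood, then choose a
# color filter. A toy word-list scorer stands in for the Google Natural
# Language API used by the real site; all names and tags are illustrative.

POSITIVE = {"happy", "joy", "love", "win", "bright", "fun"}
NEGATIVE = {"sad", "loss", "fear", "gray", "slow", "fail"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1.0, 1.0]; positive means upbeat text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

# Stand-in for the stock-photography collection: images tagged by mood.
IMAGE_INDEX = {
    "celebration.jpg": "positive",
    "rainy_street.jpg": "negative",
    "city_skyline.jpg": "neutral",
}

FILTERS = {"positive": "warm", "negative": "cool", "neutral": "none"}

def build_happygram(text: str) -> dict:
    """Map text to a mood, then to a matching image and filter."""
    score = sentiment_score(text)
    mood = "positive" if score > 0.25 else "negative" if score < -0.25 else "neutral"
    image = next(name for name, tag in IMAGE_INDEX.items() if tag == mood)
    return {"mood": mood, "image": image, "filter": FILTERS[mood]}

print(build_happygram("What a happy, bright day full of joy!"))
```

In the production version, the word-list scorer would be replaced by a call to a hosted sentiment endpoint, and the image index would be a queryable database rather than an in-memory dictionary, but the shape of the flow is the same.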
See it for yourself
Mike and I, along with several of our Web Chefs, are getting ready to head to New Orleans for ONA19, the annual Online News Association conference. We’ll be demonstrating HappyGram in the ONA Midway on Friday, September 13. If you’ll be at the conference, stop by! You’ll be surprised by how quickly this simple ML app can handle a chore that is typically time-intensive, and even make it fun.
Imagine being able to automate the initial selection of photos for any piece of content — even recorded interviews that need to be transcribed. Then come see it in action! If you won’t be at ONA19 (or can’t wait until Friday), you can read more about how we created HappyGram — and see some examples of past demonstrations — at https://www.happygram.ai/about. Have questions about how ML can simplify or speed your content process? Contact Four Kitchens. We’d love to talk with you about the possibilities.