Creative Cloud owner unveils ‘creative-centric’ generative AI

Ben Wodecki, Jr. Editor

October 21, 2022



Adobe has unveiled a slew of new products and tools for creatives, including AI-augmented post-production solutions, at its annual Max conference this year.

“Over the past few years, creative expression has become a widespread desire,” wrote Scott Belsky, chief product officer for Creative Cloud at Adobe. “From new social media platforms to our efforts to stand out in school or at work through our ideas, creativity has become a vital skill for everyone.”

Generative AI

Among Adobe's biggest announcements was its push into generative AI, with the likes of DALL-E, Stable Diffusion and Midjourney having soared in popularity this year.

Adobe announced it would be developing creator-centric generative AI that gives artists and other creators control over their style and work.

It also announced it is leveraging the Content Authenticity Initiative (CAI) standards, which give creators the ability to attach provenance data to digital content.

Such a move would “ensure that creators get credit for their work, and that those who view a piece of content understand who made it and how it was created and edited,” Adobe said.

Adobe has pledged to invest in research and product design talent to develop its approach, with the goal of eventually integrating creative-centric generative AI into its Creative Cloud tools.

Belsky wrote in a blog post that while Adobe is still early in its generative AI journey, one eventual goal is for AI in Photoshop to generate “rich, editable” Photoshop Document (PSD) files.

“The AI can generate a dozen different approaches and you can pick the two or three you want to explore further, using Photoshop’s full selection of tools to transform the AI-generated image into something that faithfully reflects your creative perspective,” he said.

Adobe’s use of generative AI will not be limited to images; the company will look to apply the technology across mediums including video, 3D design, texture creation, logo design and more. Even with their versatility, however, these AI tools are meant to assist rather than replace humans.

“We see generative AI as a hyper-competent creative assistant,” he said. “It will multiply what creators can achieve by presenting new images and alternative approaches, but it can never replace what we value in art: human imagination, an idiosyncratic style, and a unique personal story.”

Cleaning up old images

Adobe also showcased Photo Restoration Neural Filter, a new tool designed to clean up old images in a couple of clicks.

Powered by the company’s Sensei AI engine, the filter lets Photoshop users fix spots, tears and faded colors more easily.

It is essentially an extension of Adobe’s Neural Filters, which handle major edits such as skin smoothing. That feature, first shown in 2020, was built alongside Nvidia.

This latest version, which focuses solely on photo restoration, is currently available in beta and lets users easily perform what Adobe described as a “formerly arduous task.”

Substance 3D

Just a week after Meta, Microsoft and Accenture announced a partnership to bring VR to the enterprise, Adobe had some VR news of its own.

The company showed off additions to Substance 3D, its suite of tools for creating 3D objects. The suite now includes Modeler, a tool that lets users create models using a VR headset.

Modeler is used to make designs as if the creator is working with virtual clay in an immersive VR environment. It can also be used on the desktop.

Adobe said the headset will provide users with “the tactile sensation of creating with your hands, while the desktop makes it easier to add details and nuance.”

Demos of upcoming projects

There were various demos at the Max ’22 Sneaks showcase, where Adobe teases projects and software it is working on.

Among them was Project Instant Add, a way for creatives to add visual effects in post-production by using AI to automatically map graphics to a chosen element in the video.

Also shown was Project Magnetic Type, a tool allowing users to fuse shapes into live digital text. Using an object detection model, Magnetic Type extracts swashes (typographical flourishes) from calligraphic images for use in building text-based projects, like logos.

The AI-powered Project Blink lets users find specific information or highlights in videos.

Currently available in beta, Blink lets users search for specific terms and select the portion of a video they want the system to analyze. The tool then automatically scans that portion and turns it into a new clip.

Project All of Me uses AI to generate the cropped-out portions of photos. It can also edit out aspects of an image and modify the photo subject’s apparel.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

