
Design Systems in the Age of AI

Navigating the hype and putting it into practice

Table of contents

The current state of artificial intelligence
How can we use AI today?
The design system prompt cheatsheet
Other ways to use AI within design systems
The future of AI
This guide is brought to you by zeroheight, a platform that helps you build design systems that everybody loves!

Discover more great resources about design systems, including articles, webinars, reports and podcasts at zeroheight.com

January 2024

The current state of artificial intelligence
I think we can safely say we are in a hype stage for
AI. It seems you can’t swing a stick without hitting
something that claims to be AI powered, pulls data
from large language models (LLMs), or generates
a picture of you and your cat from a ten-second
snippet of your voice. Even zeroheight is diving into
the world of AI with some upcoming releases. But
is AI worth the hype?

First, it’s worth defining what we’re talking about when we talk about AI. Science fiction has been promising us human or near-human intelligence in computers since computers were invented. IBM’s Deep Blue defeated Garry Kasparov, the chess grandmaster, way back in 1997. Google, Microsoft, Apple and almost every other major tech company have been investing in some form of AI or Machine Learning (ML) in a major way since the nineties, so why is it blowing up so much now? Has the promise been delivered?

Well, yes and no. The hype today is mainly driven by OpenAI’s ChatGPT. GPT, which stands for Generative Pre-trained Transformer, has been around since 2018, when OpenAI launched GPT-1. GPT models are artificial neural networks based on the transformer architecture: a design that uses attention to weigh the relationships between every part of an input sequence, which is what lets these models handle long passages of text. OpenAI loaded up the GPT model with a staggering amount of publicly available information from the internet

and put it to work. Since then, we’ve gone from 117 million parameters in GPT-1 to somewhere around 1.7 trillion in GPT-4, and it’s now capable of generating text that is virtually indistinguishable from human-written content. These LLMs grew so fast between 2019 and 2021 that they outpaced the predicted pace of AI growth, bringing us capabilities that folks didn’t think would be possible until at least 2030.

Then came ChatGPT… Interestingly, ChatGPT was a marketing tool to show off the power of the underlying GPT model and get developers to use it via an API; OpenAI never intended it to be a consumer product. However, the ease of use attracted a far wider audience, giving ChatGPT the fastest-growing user base of any consumer app: it reached 100 million monthly active users within two months of launch, with around 13 million unique visitors using it per day. ChatGPT isn’t the only tool out there, though.
Beyond ChatGPT
Until recently, ChatGPT was entirely text-based (as of October 2023, OpenAI has launched multimodal capabilities to their pro offering). However, generative AI (GenAI) can do more than text. First, you have your image models, including DALL-E 2, Stable Diffusion, Adobe Firefly and Midjourney. These take prompts, both text- and image-based, and use the corpus of data they have to create unique images (how unique is another question…). When it comes to UI design, none of the tools are quite there (although Uizard is looking interesting). However, image-based GenAI tools are fairly powerful when it comes to creating illustrations or photography, especially as you improve your prompt engineering skills. Even more powerful is using them to generate ideas or create better placeholder content, which a designer or illustrator can pick up and run with as a jumping-off point.

Beyond images, there are similar tools out there for video, like Pictory, Synthesia and DeepBrain AI. For audio, we have tools and APIs like Whisper, Descript and AudioCraft. Not only can they process audio, but they can also train a model on existing audio and then generate new audio in a really impressive way. If you want to be blown away, we recommend checking out Wondercraft.ai. It takes written articles and turns them into podcasts based on your voice in no time.

One of the big failings of ChatGPT, until recently, was
that it didn’t have access to the live internet, because
it was trained on a set corpus of text that only went
up until 2021. While OpenAI has just announced a
beta of their Bing plugin, which gives access to live
pages, it’s still a little unpredictable. Google Bard is
one example of another LLM that has access to the
live internet, and with the heft of Google behind it, is
already snapping at ChatGPT’s heels.

Making LLMs work with your data
LLMs are great out of the box, especially for general tasks. However, the more complex the problem you’re trying to solve, or the more personalized you need the output to be, the less effective an LLM can be with only publicly available data. This is when it’s time to feed your own data into the LLM to make it work in your context. It can then work directly from the data you provide instead of general knowledge from the existing corpus, or write in styles directly related to the content you have already created.

This is where training the model on your own data sets comes in. There are two ways to do this. The first way is literally copying and pasting your content directly into one of the web interfaces like ChatGPT. However, there are some big caveats and limitations to this approach. First, GPT-4 has a limited working memory (up to about 128K tokens in GPT-4 Turbo, which is roughly 96,000 words), so anything you’ve included in the conversation before that window is disregarded. Secondly, unless you’re paying for the enterprise version of ChatGPT, anything you paste could be included in the training corpus. So pasting anything private or regulated will definitely be problematic, and even if it isn’t, there are some security and privacy concerns.

The second way requires using the API, but it also gives you much more control. All you need to do is create a custom GPT and write a script that generates a JSON file of whatever content you want to include (TextCortex have a simple guide). In the AI world, this is called “fine-tuning,” and involves further training the base model on your own data so that your context sits on top of the general model. The OpenAI website has great docs, including fine-tuning based on style and tone, setting tight parameters for specific queries (function calling), and providing structured output based on specific data.
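The kind of script described above can be sketched in a few lines. This is a minimal example assuming OpenAI’s chat fine-tuning format (one JSON object per line in a JSONL file); the `docs` content and file name are placeholders standing in for your real documentation:

```python
import json

# Hypothetical source content: component names paired with documentation
# you've already written, used as training examples for fine-tuning.
docs = [
    ("Button", "Use buttons for the primary action on a page..."),
    ("Card", "Cards group related content and actions..."),
]

def build_training_file(pairs, path="train.jsonl"):
    """Write one JSON object per line in the chat fine-tuning format."""
    with open(path, "w") as f:
        for component, body in pairs:
            example = {
                "messages": [
                    {"role": "system",
                     "content": "Act as a design system documentation writer."},
                    {"role": "user",
                     "content": f"Document the {component} component."},
                    {"role": "assistant", "content": body},
                ]
            }
            f.write(json.dumps(example) + "\n")
    return path

build_training_file(docs)
# The resulting train.jsonl is what you would upload when creating a
# fine-tuning job via the OpenAI API.
```

The assistant message in each example is the “answer” you want the model to learn to imitate, so the more representative your existing docs are, the better the tuned model’s voice will match yours.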

How can we use AI today?

So, how can we use it for design systems?

Design systems aren’t a special case for GenAI. In fact, the very nature of systems makes them perfectly suited to being understood by LLMs, as they work on rules, guidelines, best practices and, well, systems. One of the most obvious use cases for GenAI is documentation. Having a quick look through all the major design system documentation sites, from Carbon to Polaris to Material, you’ll notice that we are all largely singing from the same hymn sheets, as most of our Western-centric design is focused on established, tried and tested best practices. Using the collective knowledge of everything out there to help you start your documentation is a no-brainer. Similarly, using AI to summarize large pages, making all your docs queryable, or auto-generating changelogs and release notes are just some ways that AI can make the documentation process more efficient. We’ll go into some of these specific use cases over the coming chapters.

Similarly, AI can help on the production side of design systems, too. From auto-generating token names to generating color palettes to automated testing and linting to autocompletes based on your codebase, there are a plethora of ways to speed up your processes or make them more informed and intuitive. We’ll list a few of our favorite tools we’ve used towards the end.

So, without further ado, let’s dive into the heart of it and provide some practical use cases for you and your team!

Prompt engineering for design systems
Most of this section will reference ChatGPT. However, many of these tricks and prompts work very similarly for Bard, NotionAI or just about any other chat-interface LLM.

It’s all about the prompt
Let’s start with the basics: a prompt is what you enter into ChatGPT to get it to do something. How helpful ChatGPT’s answers are will depend on how well-crafted your prompt is. Typically, the vaguer your prompt, the less valuable the response. Sometimes you can over-rotate and be too specific, yielding unhelpful responses, too. So it’s about defining just enough in a prompt that ChatGPT can be helpful. This tweaking and optimizing is called prompt engineering.

It’s also worth noting that every conversation window you have with ChatGPT comes with its own baggage. ChatGPT’s memory is not infinite (it’s now up to about 128K tokens in GPT-4 Turbo, which is about 96,000 words), but it is enough to start coloring your prompts. That is, whatever you’ve entered as part of that particular chat window could be used to inform future questions and prompts. This is why it’s important that you begin a new “chat” every time you start a different task. However, if you want the GPT to be more likely to remember your previous conversations, use the same chat.

Step one
Set the context
Context is really important for ChatGPT.
Because the corpus of text that it’s trained on
is so large, and because people use it for all
kinds of tasks, it can sometimes get confused
with what you’re asking and hallucinate
strange things. To avoid this as much as
possible, it’s always best to start with context.

Identify a role or
persona
To avoid having to give context with every
prompt, we kick off a conversation with
ChatGPT to establish what mode we want it to
operate in, who the audience is and any
ground rules. To start, we give it an idea of
who to write as. Starting a prompt with "Act
as..." can be effective, especially when you're
looking to set a specific role or perspective for
the AI. This approach is particularly useful in
scenarios where you need the AI to adopt a
certain persona or expertise, like a character in
a story, a professional in a specific field, or
even mimicking a certain style of
communication.

However, it's not always necessary. If your request is straightforward or doesn't require a specific role-play, you can directly state your question or the information you need. For example, if you need information on design systems, you could simply ask, "Can you explain the key components of a design system?" instead of saying, "Act as a design system expert..."

In our case, as folks deeply involved in design and UX, "Act as..." will be beneficial when we need responses that mimic people who write design system documentation. It sets a clear expectation for the type of response or the perspective you're seeking. So we’ll start our prompt with:

Prompt
Act as a design system documentation writer.

Set the target audience
Next, we need to set the target audience.
Letting the AI know who the target audience is
means that it can use specific language the
audience understands. However, it’s important
to think of who your target audience is and not
just anyone who might read the
documentation, so ChatGPT can provide a
focused response. In this case, something like:

Prompt
Target audience: designers, developers, and content designers at various experience levels, from new hires to seasoned professionals.

Provide the style and format
Now it’s the style, or voice and tone of the
response. You don’t need to be overly verbose
or prescriptive here, just give it some
guidance. Remember, we already told it that it
needs to mimic how a design system
documentation writer writes. This is also
where you’d specify anything you would prefer
with formatting. In our case:

Prompt
Style: professional, casual, concise, plain language. Format: prioritize bulleted lists over paragraphs.

Include your specific request
Finally, if you’re asking it to do very specific
things, it’s also worth telling it the task you
expect it to do. For example, if you’re trying to
get usable documentation for specific
components, you could say, “I will ask you to
create design system documentation pages
for components in my design system. For each
prompt, the output should be a page that
documents that component.” However, if you
are using it for little bits of information to go
within a page or asking it to do different styles
of tasks, it’s best to leave this out.

Putting it all together, here’s the context-setting prompt:

Prompt
Act as a design system documentation
writer. Target audience: designers,
developers, and content designers at
various experience levels, from new
hires to seasoned professionals. Style:
professional, casual, concise, plain
language. Format: prioritize bulleted
lists over paragraphs.

You have two options to set the context. First, you can include something like the above as the initial prompt in a chat. However, remember that your chats only have a specific window of memory. This is good if you are doing a specific task and don’t expect to be using the same chat for hours or days.
Alternatively, you can use ChatGPT’s custom instructions to define how you write and how you would like ChatGPT to respond. You can currently find this in the bottom left menu on the desktop app, and you would write these instructions in the “How would you like ChatGPT to respond?” field.

The Custom instructions window in ChatGPT, where you can provide details on how you’d like ChatGPT to respond.
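If you’re working through the API rather than the chat UI, the same context prompt maps onto the system message, which plays the role of custom instructions for a single conversation. A minimal sketch (the helper name and model comment are our own, not part of any SDK):

```python
# Reusable context, identical to the context-setting prompt above.
CONTEXT = (
    "Act as a design system documentation writer. "
    "Target audience: designers, developers, and content designers at "
    "various experience levels, from new hires to seasoned professionals. "
    "Style: professional, casual, concise, plain language. "
    "Format: prioritize bulleted lists over paragraphs."
)

def build_messages(user_prompt):
    """Pair the reusable context (as the system message) with a task."""
    return [
        {"role": "system", "content": CONTEXT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Document the Button component.")
# With the OpenAI Python SDK, this list would be passed as the `messages`
# argument to a chat completions call.
```

Because the system message rides along with every request, you get the “same chat” behavior without worrying about the context window swallowing your instructions.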

Step two
Write your prompt
Prompting requires practice and tweaking. It’s
very rare that you write a prompt and get
exactly what you want first try. However,
understanding how to prompt and some of the
types of prompts can really help.
Zero-shot prompting

Zero-shot prompting is giving the LLM a prompt with no context, which is useful for quick responses or to generate ideas without confining the results. For example:

Prompt

Give me a name for my design system

ChatGPT provides an answer based on a generic request without much detail.

One-shot prompting

One-shot prompting is giving one piece of context to the LLM. This can help guide the response and ensure it aligns with a key piece of information. For example:

Prompt
Give me a name for my design system.

My company’s name is zeroheight.

With a little more detail, ChatGPT considers the info in its response.

Few-shot prompting

The few-shot prompting strategy includes using a few bits of context that are important. Think of key pieces of information and some type of guidance. The more examples included in the prompt, the closer the generated output should conform to what you’ve defined. For example:

Prompt
Give me a name for my design system.
My company’s name is zeroheight.
Some design system names that I like are:
- Carbon
- Polaris
- Material

ChatGPT incorporates the examples to produce a solution aligned with them.

Providing feedback

One of the best ways to train the LLM is to treat it like a conversation and provide feedback, or further prompts, once you have a response.

Prompt
I like your rationale for the above, but the company is more fun than that. Come up with a name in a similar style, but one that feels more irreverent, while still containing some meaning.

Feedback is a great way to confirm the results or have it generate improvements.

Similarly, you can ask the LLM to be self-critical. This is especially useful if you’re asking it to output code or formulas, but can also be effective to check an answer to see if there are any angles not covered. A good prompt for this could be:

Prompt
Please re-read your above response. Do you see any issues or mistakes with your response? If so, please identify these issues or mistakes. Rewrite the original response with these issues or mistakes rectified.

ChatGPT can also iterate based on reflecting on its own response.

Likewise, if you end up going down a path you are unhappy with, you can ask it to disregard particular bits of information or potentially the whole conversation.

Use variables

One more power-user move with ChatGPT is to use variables. Like in programming, a variable is a placeholder for a value. This allows you to make the prompt act as a loop, running the prompt again and again for different values. While I’ve found this mostly useful for non-design system related things (e.g., automating email responses comes to mind), there are definitely some use cases here, especially if you want ChatGPT to output streams of data about a whole list of components. For example:

Prompt
Provide lists of three key do's and don'ts for [component] based on best practices, including accessibility and usability.

[component] = card, button, carousel, top nav

Using variables is a great way to have repetitive work done by ChatGPT.
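The same variable trick translates directly to scripting against an LLM API: expand the template once per value and send each result as its own prompt. A small sketch of the expansion step, using the example prompt above (the template string and component list are from that prompt; the sending step is left out):

```python
# Template with a placeholder matching the [component] variable above.
TEMPLATE = (
    "Provide lists of three key do's and don'ts for {component} "
    "based on best practices, including accessibility and usability."
)

components = ["card", "button", "carousel", "top nav"]

# Expand the template once per component, as ChatGPT does with the
# [component] variable; each string could then be sent as its own request.
prompts = [TEMPLATE.format(component=c) for c in components]

for p in prompts:
    print(p)
```

This is also a natural place to batch work: one expanded prompt per component keeps each response focused, rather than asking for everything in a single giant reply.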

Creating custom GPTs

Providing context to your prompts is a way to make the AI respond in a more predictable, useful way. However, OpenAI just launched a new way to do this that is much more powerful.

As of November 2023, you can create your own custom GPTs using the ChatGPT interface, and provide your own knowledge to train the model. This means you can upload PDFs, CSVs, markdown files, JSON files or whatever you need to help train your custom GPT to respond based on your own data. While this fine-tuning has been possible via the API, it is now available to everyone, making it much more accessible.

To begin with, why not feed it existing documentation to establish a voice and tone for responses? Or if you’ve created personas of your internal audience, feed those in so the model can better understand who you’re trying to talk to. Or if you’ve created your own content guidelines, feed those in!

One thing to note with custom GPTs is that if you’re feeding in proprietary content, make sure you check the OpenAI privacy controls and opt out of having what you upload fed into the broader OpenAI model.

How to use what ChatGPT creates
Once you start to experiment, you can easily
have a whole bunch of AI-created content to
use with your design system documentation.
Or do you? One important note here is to
always check what ChatGPT outputs. This is
because LLMs aren’t always 100% accurate
and can come up with some wildly inaccurate
facts — and sound confident about it, too. In
the AI world, this is called “hallucinating,” and
while these models are trained consistently to
reduce the number of times this happens, it
does happen.

Instead, the best way to approach using LLMs in your day-to-day work is as a helper. Some examples of where they do well include providing templates or starting-off points, and producing content or copy for grunt-work tasks that are based on easy-to-understand rules. Personally, we think they’re perfect for creating templates, generating names based on naming conventions, tidying up data, and providing feedback on existing content.

The design system prompt cheatsheet
Beyond generating copy, LLMs can do quite a bit to boost your design system. For example, they can generate color values, test for accessibility, and write specific details. If you’re looking for inspiration or a starting point, here are several prompts that we tried and tested with design systems in mind.

Creating color ramps


Prompt
Provide a color ramp, with 3 shades and tints either side, for the following colors:
- [color 1]
- [color 2]
- [color 3, etc.]

Include a swatch, the hexcode, RGB and HSLA values, as well as a name that follows the [color]-[number] format, with the numbers ranging from 100 to 900, with 500 being the base color, 100 being at 10% lightness and 900 being at 90% lightness.

[color] should be a common name for the color, not a hexcode, and be presented in camelCase.

Present the information in a table.
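Since LLMs can fumble color math, it’s worth knowing that the numeric part of this prompt is deterministic enough to sketch in code and use as a sanity check. This is our own interpretation of the prompt’s rules: the ramp keeps the base color’s hue and saturation while lightness steps from 10% to 90%, and the 500 step is pinned to the base color itself (the name and hexcode below are just examples):

```python
import colorsys

def hex_to_rgb(hexcode):
    """#rrggbb -> (r, g, b) floats in 0..1."""
    hexcode = hexcode.lstrip("#")
    return tuple(int(hexcode[i:i + 2], 16) / 255 for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return "#" + "".join(f"{round(c * 255):02x}" for c in rgb)

def color_ramp(name, hexcode):
    """Build a [color]-[number] ramp: 100 at 10% lightness up to 900 at
    90%, with 500 pinned to the base color itself."""
    h, l, s = colorsys.rgb_to_hls(*hex_to_rgb(hexcode))
    ramp = {}
    for step in range(100, 1000, 100):
        if step == 500:
            ramp[f"{name}-500"] = hexcode
        else:
            new_rgb = colorsys.hls_to_rgb(h, step / 1000, s)
            ramp[f"{name}-{step}"] = rgb_to_hex(new_rgb)
    return ramp

ramp = color_ramp("oceanBlue", "#1d70b8")
```

Note the trade-off baked into the prompt: if the base color isn’t at exactly 50% lightness, pinning it to the 500 slot makes the steps around it slightly uneven, which is also true of the ramp ChatGPT returns.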

Testing the accessibility of color combinations with the color ramp
Prompt
For the following colors, test the accessibility of [white hexcode] and [black hexcode] text on backgrounds of each of those shades or tints, and tell me whether they're level A, AA or AAA WCAG 2.2 compliant on color contrast.
- [color 1]
- [color 2]
- [color 3, etc.]

Present the information in a table.
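Contrast ratios are exactly the kind of arithmetic an LLM can confidently get wrong, so it’s worth double-checking its table deterministically. A sketch of the WCAG 2.x contrast-ratio formula; the thresholds shown are for normal-size text (large text uses lower cutoffs), and the helper names are our own:

```python
def relative_luminance(hexcode):
    """WCAG relative luminance of an sRGB hexcode."""
    def channel(c):
        c = c / 255
        # Linearize the gamma-encoded sRGB channel.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    hexcode = hexcode.lstrip("#")
    r, g, b = (channel(int(hexcode[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L_lighter + 0.05) / (L_darker + 0.05), from 1:1 up to 21:1."""
    lums = sorted((relative_luminance(fg), relative_luminance(bg)),
                  reverse=True)
    return (lums[0] + 0.05) / (lums[1] + 0.05)

def wcag_level(fg, bg):
    """Classify a pairing for normal-size text."""
    ratio = contrast_ratio(fg, bg)
    if ratio >= 7:
        return "AAA"
    if ratio >= 4.5:
        return "AA"
    return "fail"

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # → 21.0
```

Running this over the same ramp you fed ChatGPT gives you ground truth to compare its A/AA/AAA table against.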


ChatGPT provides an answer based on generic
request without much detail.

Generating token names from a set of variables

Prompt
Please list all the combinations I can have for this token structure:

Theme.Component.Type.Size.State.Color

The variants are:
Theme: Dark, Light
Component: Button
Type: Primary, Secondary
Size: Small, Medium, Large
State: Default, Disabled, Focus, Hover
Color: Background, Stroke, Shadow

Rules:
Ignore stroke for any small buttons, and for primary medium or large.
Ignore background for any medium or large secondary buttons.

Don't provide explanation, just a list of the possible combinations considering the rules.
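This is also a result you can generate (or verify) without burning tokens, since it’s pure combinatorics. A sketch encoding the same variants and rules as the prompt above:

```python
from itertools import product

themes = ["Dark", "Light"]
components = ["Button"]
types = ["Primary", "Secondary"]
sizes = ["Small", "Medium", "Large"]
states = ["Default", "Disabled", "Focus", "Hover"]
colors = ["Background", "Stroke", "Shadow"]

def allowed(type_, size, color):
    # Ignore stroke for any small buttons, and for primary medium or
    # large (together: stroke survives only on secondary medium/large).
    if color == "Stroke" and (size == "Small" or type_ == "Primary"):
        return False
    # Ignore background for any medium or large secondary buttons.
    if color == "Background" and type_ == "Secondary" and size != "Small":
        return False
    return True

tokens = [
    ".".join((theme, component, type_, size, state, color))
    for theme, component, type_, size, state, color
    in product(themes, components, types, sizes, states, colors)
    if allowed(type_, size, color)
]
```

With these rules, every theme/type/size/state combination keeps exactly two of the three colors, so the full list comes to 96 tokens. Comparing a list like this against ChatGPT’s output is a quick way to catch combinations it silently dropped or invented.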

Providing a template for a component documentation page
Prompt
Create a template for a page
documenting a single component
within a design system based on best
practices, and following the style of
Carbon, Gov.uk and Polaris. Delineate
with headings. Use real-world
examples for the sample content
based on a call-to-action button
component.

Writing best practice do’s and don’ts

Prompt
[Attach image]

We are documenting the attached [component name] as a component in our design system. Considering the attached image, please write a list of do's and don'ts for the component documentation page, considering best practice on [component name] usage within modern web [or app] products.

Writing accessibility audits for components

Prompt
[Attach image]

We are documenting the attached [component name] as a component in our design system. Considering the attached image, please run an accessibility audit on the component, comparing it to WCAG 2.2 guidelines, including an audit on color contrast, interaction guidelines and suggestions for improvements.

Creating code from screenshots
Prompt
[Attach image]

Create this [component name] as a component for my design system in React, with four different states: [e.g. default, disabled, active and hover]. For each of these states, change the style of this [component name] according to best practices. Create [number of desired variations] different variations, with this image being the primary, and an inverted, outlined version being the secondary.

Writing microcopy
Prompt
Consider a [component name] component.

Write [number of desired variations] variations of microcopy to go inside a [component type] that encourages the audience to [desired task and/or outcome]. The audience is [role(s) in context].

Make the microcopy no more than [number] characters. Use plain language.

Explain why each example is effective.

Give [number] variations on “[phrase]” as [component type] copy. Bias for [familiarity, originality, effectiveness, coerciveness].

Here’s an example of this in use for a button component:

Consider a button component.

Write 5 variations of microcopy to go inside a “call to action” button that encourages the audience to create an account for the free version of our product. The audience is designers and developers at B2B SaaS companies.

Make the microcopy no more than 16 characters. Use natural language.

Explain why each example is effective.

Give five variations on “[phrase]” as call-to-action copy. Bias for [familiarity, originality, effectiveness, coerciveness].

Naming a component
Prompt
[Attach screenshot]

In a design system, what is this component named?

Prompt
What is a better name for a design system component: [name 1] or [name 2]?

Rewriting documentation in the correct style

Prompt
Rewrite the following documentation:

[insert paragraph]

Consider the following rules:

[add rules, for example:]
- Maximum 200 words
- Informal language
- Accessible to all disciplines, and free of jargon
- Gets to the point

Other ways to use AI within design systems
It’s not just ChatGPT...
It feels like the majority of the conversation around
AI has been centred on ChatGPT. However, there
are loads of case-specific tools out there that can
help you, or even other ways of using
conversational LLMs like ChatGPT to achieve what
you need. Over the next few pages, you’ll find a
few other use cases for AI, and some other tools
that could help you in your day-to-day work.

As your decision buddy and sounding board
One of the ways I’ve seen a lot of people use
ChatGPT and similar conversational LLMs is to use
them as a sounding board for validating ideas and
rewording arguments. Because of the nature of
most of these models, they have a fair amount of
knowledge when it comes to most areas of your
job, as long as someone on the internet has written
something about it. Explaining a situation, giving
the LLM context, and then asking for advice on
pros and cons, how to make the argument more
convincing to your audience, or asking it to foresee
any risks can flag some things you hadn’t thought
of, or even confirm some thoughts you already
had.

Helping you code
There are a number of tools out there to help you
get the job done quicker when it comes to coding.
Here are a few:

Writing code

To be honest, ChatGPT is a pretty robust tool for assisting in writing code. Explain what you need, and what language you want it to output, and it will create some relatively elegant, usable code most of the time. However, tools like GitHub Copilot and Tabnine are made for the job. Both work directly in your IDE, and while the text-to-code generation is cool, the real power is in the smart autocomplete functionality. They can also act as a debugger if something isn’t working: you can literally select a chunk of code, write “fix it” and they’ll do the job.

Running tests

Testing is tedious. However, AI is perfect for these kinds of tasks. Being given instructions and following them is its strong point, and it’s no wonder that almost every testing suite now has AI features built in, from Katalon to LambdaTest to BrowserStack.

Writing documentation

Obviously there is no substitute for well-written documentation, but it is good to get a helping hand sometimes. Writer from Mintlify generates in-code documentation as you go, helping you document your code as you’re writing.

Helping you write

Obviously, ChatGPT and zeroheight AI are great for generating the basics, but there are a few tools out there that lean on AI to take your writing to the next level. Grammarly has been a lifesaver internally for us at zeroheight, as you can train it on your company’s (or your personal) voice, and have it base suggestions on that. Having easy-to-reach buttons that allow you to shorten, expand, improve or change tone, especially in the context of giving you feedback to begin with, is great.

If your business relies heavily on words, tools like writer.ai are changing the game. Their custom LLM focuses on writing for business, and we’ve seen it work amazingly with UX writing teams.

Helping you design
As mentioned in the intro, we’re not quite at
the point where UX and UI designers should
be worried about their job. Instead, there are
some neat little tools that can help you make
your process a bit quicker. One huge time-
saver is Figma Autoname, which auto-
generates layer names in your Figma file.
Another simple tool is Fontjoy, which uses
machine learning to create visually pleasing
font pairings, with some simple controls to
provide some input. Similarly, Khroma is a nice
color suggestion tool based on things you like,
what’s popular and what works from a color
theory point of view.

Figma Autoname uses an LLM to analyze and understand what your layer is and then offer an appropriate name.

The future of AI
What comes after
the hype?
As mentioned in the first chapter, we are
definitely in a hype cycle for generative AI. In
fact, Gartner recently placed generative AI on
the peak of inflated expectations, with
assumptions that it will reach broader
transformational benefit within the next two to
five years. We are in the over-saturation moment. When generative AI burst onto the scene in 2020, it felt new and, to be honest, a little bit scary. We had deepfakes of world leaders speaking Beyoncé lyrics, and we were being told that there was a good chance that AI would take our jobs. Move forward three years, and we’re still riding that wave, although now we’re also starting to see the benefits. There are almost 60,000 AI companies out there, with the number doubling in the last three years, and a lot of them are already showing great use cases for AI. At the same time, you still have to sort the wheat from the chaff. For every genuinely useful service, there are three half-baked attempts at applying AI to problems that didn’t need it, with results that are a bit of a damp squib.

Hype Cycle of Artificial Intelligence, 2023, Gartner

And we’re probably riding the crest of that wave, about to crash into the trough of disillusionment. ChatGPT saw its first drop in monthly user numbers in June 2023, and with every new release of ChatGPT, a whole slew of companies are closing up shop as the behemoth incorporates those tools, features, and ideas as part of its foundation. But this is what we need. It’s kind of like the double diamond. At the moment, we’re exploding out the potential use cases and getting ridiculous with our ideas. What comes next is the time where we narrow down on what is actually useful. Before then, there are a few more issues we need to figure out, though.

[Figure: The AI double diamond, with a “we are here” marker]

- Discover: discover the wide range of applications; throw AI at every problem; test out the boundaries of what it can do
- Define: define what AI should do; figure out the guardrails around AI; establish the applications of AI
- Develop: build AI into our processes; develop governance around usage; actually build the tools that we need
- Deliver: widespread AI adoption in our orgs; get real-world data on whether AI helps or hinders
The ethical and legal
issues with LLMs
The rise of generative AI brings huge potential
for most companies, especially when it comes
to streamlining tasks and processes. Yet, as
these models find their way into our day-to-
day, they stir ethical debates about bias,
privacy, and the changing nature of work. It’s
up to us to care about this, especially when
we’re dealing with a tech industry that moves
at lightning speed and legislative powers that
move at a much slower, considered pace.

Copyright and plagiarism are also big concerns, and rightly so. Because of the way LLMs work, everything a generative AI creates is based on a real human’s work, yet because most of the LLMs are closed boxes, the extent to which that original work has influenced the output is relatively opaque. Where does influence become plagiarism? Similarly, the work created by AIs has mixed status across the world. In the U.K., AI-generated works can be copyrighted with the “author” being the operator of the AI, while the U.S. stance is that AI cannot hold copyright. This inconsistency underscores the need for international legal clarity as LLMs become more prevalent in content creation.

Within compliance, LLMs present both
opportunities and challenges. They offer to
streamline compliance processes and
enhance surveillance capabilities, but they
also introduce concerns over transparency
and the need for human oversight.
Compliance officers must navigate a rapidly
evolving regulatory landscape, ensuring that
the use of LLMs aligns with both ethical AI
guidelines and existing laws such as data
protection and copyright regulations.

And of course, there are the labor concerns. While LLMs and AI could usher in an era of automation and leisure, they could also just work within our capitalist constructs to increase wealth disparity and force thousands, if not millions, out of work. There’s also the fact that the computing power required to run these services is an incredible resource drain, and won’t do the planet we live on any favors.

As we embrace the efficiency and capabilities of AI, we must also engage in robust discussions about the ethical, legal, and societal implications of its use (look at how much we slept on social media, and the real-world impact it’s now having, both on a micro level with mental health and on a macro level with disinformation, politics and elections). The development of responsible AI requires a concerted effort from designers, technologists, legal experts, ethicists, and users. By fostering a culture of transparency, accountability, and continuous learning, we can harness the potential of AI while safeguarding against its risks, ensuring it serves as a complement to human ingenuity rather than a replacement.
How do we move forward?
All of these concerns are valid, and we must
have a constant eye on how it’s evolving. For
the moment, we’d definitely suggest dipping
your toe in the water though. The biggest
piece of advice we would have is to try and
keep your use cases narrow. You can’t expect
generative AI to create a whole design
system, from design through to code, with a
single, two-line prompt. However, it can speed
up laborious and mundane tasks to a degree
that should allow us all to spend more time
thinking about innovating and putting our
users first.

The robots are coming to zeroheight
zeroheight’s first AI feature will be released in
early 2024. Want to get a sneak peek and
have first access?

Get in touch with our Sales team and request a demo today.

Request a demo
