Using Artificial Intelligence
Generative artificial intelligence (AI) technology can be a useful tool to help social media professionals enhance how they create and manage social media content. However, AI comes with risks and should be used to complement their work, not replace it.
Platforms such as Facebook, Instagram and Twitter already use AI algorithms to curate feeds for their audiences. These algorithms draw on an individual’s engagement, interests and online behaviour to decide which content to surface, allowing experiences to be personalised.
AI platforms can help with generating ideas for social media content, community engagement and ads management, but they cannot be relied on alone and should always include a human element.
As generative AI is changing rapidly, we’re still learning how these tools can be integrated into social media and how professionals can use AI to enhance their work.
Ways Generative AI can be incorporated
Content Creation – Text and Visuals
Generative AI tools can create videos, images, music and text from prompts, speeding up the content creation process. Social media professionals can enter prompts to generate draft content and refine the output with follow-up prompts, cutting down on the time it takes to create content.
However, AI cannot reliably distinguish fact from fiction, making human review essential to ensure accuracy and appropriateness. AI tools can also reflect or amplify biases present in their training data.
When using AI tools, be aware of intellectual property and copyright issues. AI-generated content might unintentionally mimic existing works, leading to potential infringement. Always check that AI-produced material doesn’t violate third-party rights, is appropriate for public sharing and is free of biased or discriminatory elements.
Chatbots
Chatbots can help to automate customer service experiences. They provide instant, real-time support to customers by answering frequently asked questions or redirecting customers to web links and call centres, helping businesses cut down their social media response time. However, chatbots can also disseminate misinformation, provide inappropriate or insensitive responses and create risks in handling sensitive user data, so robust guidelines, thorough testing and ongoing oversight are essential.
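To illustrate the kind of automation described above, here is a minimal sketch in Python of a rule-based FAQ responder with a human-escalation fallback. The topics, answers, links and phone numbers are hypothetical placeholders, and a production chatbot would sit behind a platform’s messaging tools with the guidelines, testing and oversight noted above.

```python
# Minimal keyword-based FAQ responder with a human-escalation fallback.
# All topics, answers, links and contact details are illustrative placeholders.

FAQ_RESPONSES = {
    "opening hours": "Our service centres are open 9am to 5pm, Monday to Friday.",
    "licence renewal": "You can renew your licence online at https://example.nsw.gov.au/renew",
    "phone": "You can reach our call centre on 1300 000 000.",
}

FALLBACK = (
    "Sorry, I couldn't find an answer to that. "
    "I'll pass your question to a team member who will reply during business hours."
)


def respond(message: str) -> str:
    """Return a canned answer if the message matches a known topic, else escalate."""
    text = message.lower()
    for keyword, answer in FAQ_RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK


if __name__ == "__main__":
    print(respond("What are your opening hours?"))
    print(respond("Can I dispute a parking fine here?"))  # no match, escalates to a human
```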
Ad Management
Some AI tools can analyse ad management data, including ad targeting, budgeting and content placement, and provide performance insights. This allows social media professionals to tailor campaigns and social advertising to their audiences, increasing clicks and conversions.
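As a simple illustration of the metrics these tools report on, the sketch below calculates click-through rate, conversion rate and cost per conversion from hypothetical campaign figures. AI-driven ad tools automate this kind of analysis at scale and layer targeting and budget recommendations on top.

```python
# Illustrative calculation of basic paid-social performance metrics.
# The campaign names and figures below are hypothetical placeholders.

campaigns = [
    {"name": "Campaign A", "impressions": 120_000, "clicks": 1_800, "conversions": 90, "spend": 1_500.00},
    {"name": "Campaign B", "impressions": 95_000, "clicks": 2_300, "conversions": 60, "spend": 1_200.00},
]

for c in campaigns:
    ctr = c["clicks"] / c["impressions"] * 100            # click-through rate (%)
    conversion_rate = c["conversions"] / c["clicks"] * 100  # conversions per click (%)
    cost_per_conversion = c["spend"] / c["conversions"]     # spend per conversion ($)
    print(
        f"{c['name']}: CTR {ctr:.2f}%, "
        f"conversion rate {conversion_rate:.2f}%, "
        f"cost per conversion ${cost_per_conversion:.2f}"
    )
```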
NSW Government Policy
Before adopting AI into business processes, assess all risks through the AI Assurance Framework, including risks to privacy, security, inclusion and copyright.
When using AI tools:
- use a government email address when using AI for work purposes
- do not enter any sensitive or corporate information into the platforms, such as meeting notes, project information or any information protected by an NDA
- do not enter customer information into publicly available generative AI platforms
- ensure that any content generated by AI tools complies with intellectual property and copyright laws to prevent infringement
- evaluate AI-generated content for potential biases, accessibility, and best practice according to your brand’s guidelines and tone-of-voice
- disclose when you’re using AI-generated content in the post copy. Here’s an example of what that may look like:
An NSW Government Facebook post showing multiple AI-generated images of gingerbread-themed NSW icons. The post copy includes a statement that reads ‘Generated by AI’.
For additional advice on using publicly available generative AI tools, refer to Cyber Security NSW’s guidance.
General risk factors
- the potential to cause discrimination from unintended bias
- insufficient experienced human oversight of the AI system
- over-reliance on the AI system, or ignoring the system due to high rates of false alerts
- unclear linkage between operating the AI system and the intended policy outcome.
Individual or community risk
- physical harms
- psychological harms
- environmental harms or harms to the broader community
- unauthorised use of health information or sensitive personal information
- impact on a right, privilege or entitlement
- unintended identification or misidentification of an individual
- misapplication of a fine or penalty
- other financial or commercial impact
- incorrect advice or guidance
- inconvenience or delay.