New research from Flinders University highlights the need for artificial intelligence systems to support rather than undermine worker safety and welfare in Australian workplaces.
While Australia has taken commendable first steps towards responsible governance of AI, its current regulatory apparatus lacks the legally binding and workplace-specific mechanisms necessary to mitigate emerging risks, according to Future of Work expert Associate Professor Andreas Cebulla.
The research, published in the Journal of Industrial Relations, found that while early assessments of AI have focused on job automation and productivity gains, a growing body of evidence points to AI affecting workplace relationships, worker autonomy and psychosocial wellbeing.
Australian workplaces have seen rapid adoption of AI technologies including data entry automation, document processing, fraud detection and generative AI tools. However, these innovations also introduce risks of algorithmic management, erosion of tacit knowledge, digital incivility and devaluation of human labour.
Current governance frameworks fail to sufficiently address these relational harms, requiring a shift in how AI is conceptualised. Rather than viewing it simply as a technical tool or economic input, AI should be recognised as a social actor with the power to shape working relationships, identities and hierarchies.
The research identifies the failure to integrate AI-related risks into Work Health and Safety regulations as a key gap in Australia’s policy response. Drawing on national and international data, the study catalogues AI-related risks that affect workplace dynamics and employee agency.
Associate Professor Cebulla proposes a framework for managing risks grounded in job crafting, participatory oversight and expanded WHS definitions. The framework treats workers as co-designers rather than end-users of AI integration, building on existing industrial relations infrastructure including union representation and safety committees.
When job crafting is legitimised and supported, it enables workers to transform potential threats into sources of meaning and resilience. Where AI tools are optimised for organisational goals such as efficiency and compliance, job crafting optimises for worker values including dignity, purpose and agency.
The goal is not to eliminate AI’s role but to co-produce workplaces that combine operational accountability with worker dignity and agency. AI tools do not merely automate; they reconfigure how decisions are made, who holds authority, how performance is interpreted and what kinds of labour are seen as legitimate.
Associate Professor Cebulla is an affiliate of the Flinders Factory of the Future and researches the future of work and technology, including the ethical use of artificial intelligence in workplaces and the social impacts of automation and new technologies.