In the recent Fair Work Commission decision Mr Branden Deysel v Electra Lift Co [2025] FWC 2289, Deputy President Slevin applied a critical lens to the use of ChatGPT by the Applicant, who was seeking an extension of time to make an application to deal with contraventions involving dismissal.

One factor the FWC examines in considering an extension of time is the merit of the substantive application that would be allowed to proceed. This is where, as Deputy President Slevin observed, artificial intelligence had an unhelpful role to play in this matter (at paragraph 6):

“As to the merits of the claim, Mr Deysel confirmed during the conference that he had used an artificial intelligence large language model, Chat GPT, in preparing his application. So much was clear from the deficiencies in the application which failed to address the matters required to make good a claim that Part 3-1 of the Fair Work Act had been contravened. The application also included an extract from advice given by Chat GPT which was that various employment and other statutory obligations had been contravened by the Respondent. The advice suggested that Mr Deysel commence various legal actions against the Respondent, including making application under s. 365 of the Act. I can see no basis for this advice.”

The Deputy President continued his critique of ChatGPT and its use in this context (at paragraph 7):

“Chat GPT also advised Mr Deysel to consult a legal professional or union representative to determine the appropriate course of action. He did not do so. Mr Deysel simply followed the suggestion made by Chat GPT and commenced the proceedings. The circumstances highlight the obvious danger of relying on artificial intelligence for legal advice. The result has been Mr Deysel commencing proceedings that are best described as hopeless and unnecessarily wasting the resources of the Commission and the Respondent in doing so.”

To put the criticism in context and perspective, the use of AI by the Applicant did not make a material difference to the outcome of the extension of time application. General Protections claims involving dismissal need to be brought within 21 days of dismissal unless there are “exceptional circumstances”. This application was brought 919 days after the end of the Applicant's employment, which resulted from his resignation rather than a termination by the employer. The Applicant's submissions that he lacked awareness of his workplace rights and was concerned about retribution from his former employer were rejected. In relation to the former, the FWC has regularly held that ignorance of rights is not a reason to excuse delay. As to the latter, it was held there was no evidence supporting the asserted concern. Further, the Respondent cited the prejudice it would suffer if called upon to respond to events from over two years ago, in circumstances where it was never put on notice that the termination would be challenged. This extension of time application was never going to be successful, irrespective of the use of ChatGPT, although it is conceivable that the “advice” given to the Applicant by AI, for which the Deputy President found there was “no basis”, may have led to, or emboldened, a decision to bring the application before the FWC.

Observations

Some observations:

  • This decision illustrates the risks and dangers of using ChatGPT (or other AI applications or models) for the preparation of FWC applications and responses.
  • These risks and dangers are particularly acute in a jurisdiction such as the FWC, where principles of fairness, which require value judgments, need to be carefully considered. The work of the FWC does not lend itself to a formulaic approach, slavish to previous cases that may ostensibly have similar facts. It is a trite proposition that each case turns on its own circumstances.
  • While it made little difference in this case, it is only a matter of time before an otherwise meritorious application or response is undermined by the misuse of AI apps or models. Courts and tribunals frequently dealing with self-represented litigants need to be acutely aware of the possibility that it is being used. If it is, and it leads to a sound case being poorly argued or presented, that is the fault of the party seeking to rely on the technology. The FWC should give no concession or latitude to any party for having made that decision.
  • AI may have a role to play in FWC proceedings but, at this stage in its development, it needs to be used judiciously and its output treated with a healthy degree of scepticism.

If you would like to republish this article, it is generally approved, but prior to doing so please contact the Marketing team at marketing@swaab.com.au. This article is not legal advice and the views and comments are of a general nature only. This article is not to be relied upon in substitution for detailed legal advice.
