DEI - Diversity, equity, and inclusion refers to organizational frameworks that promote the fair treatment and full participation of all people, particularly groups who have historically been underrepresented or subject to discrimination on the basis of identity or disability. Employers everywhere have an opportunity to contribute to a more just world, and those who don't risk being dismissed, particularly by younger generations. A company's success increasingly depends on how it treats its people.
If you need proof, research by McKinsey & Company has found that companies with more diverse workforces perform better financially, with a correlation between workforce diversity and higher profits. On gender diversity, Sheila Flavell CBE, Chief Operating Officer of FDM Group, put it well when she said:
“Companies cannot expect to remain competitive in a global economy if they are not tapping into 50% of the potential workforce; therefore recruiting a gender diverse employee population isn’t just the right thing to do, it is also the smart thing to do.”
Knowing which words discourage a person from applying for a job is tricky business. You either need to conduct in-depth research into what not to write, or you can use ChatGPT to help you sort it out. To get useful results from ChatGPT, you have to know how to phrase your prompts. The examples below will help you obtain the information you need.
ChatGPT can help you identify and replace gendered language with more inclusive terms. For example, instead of using words like "he" or "she," you can use "they" or "their." It can also help you avoid words that appeal more to one gender than another.
Examples of prompts that you can use:
“Give me examples of how this text could become more gender-neutral”
“Identify what in this text is not gender-neutral”
“Rewrite this text so that it is gender-neutral”
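To illustrate the kind of substitution these prompts ask ChatGPT to perform, here is a minimal Python sketch that swaps a handful of gendered terms for neutral alternatives. The word list is a small, hypothetical sample for demonstration, not a vetted lexicon; a real list would be curated with HR and DEI input.

```python
import re

# Hypothetical mapping of gendered terms to neutral alternatives.
# This is a tiny illustrative sample, not an authoritative list.
NEUTRAL_TERMS = {
    r"\bhe or she\b": "they",
    r"\bhis or her\b": "their",
    r"\bchairman\b": "chairperson",
    r"\bsalesman\b": "salesperson",
    r"\bmanpower\b": "workforce",
}

def neutralize(text: str) -> str:
    """Replace common gendered terms with neutral alternatives."""
    for pattern, replacement in NEUTRAL_TERMS.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(neutralize("The chairman will brief his or her team on manpower needs."))
```

Note that this only handles the joint forms like "his or her"; replacing bare pronouns mechanically would break grammar elsewhere in the sentence, which is exactly why a language model handles this task better than a word list.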
ChatGPT can also point out language that is biased or exclusionary. Many people who write job ads try to spice them up with "fun" words, but that can hurt an ad's performance. Not everyone appreciates terms like "rockstar," "ninja," or "guru," because they can reinforce stereotypes and discourage candidates who don't identify with those traits.
Examples of prompts that you can use:
“What is biased language, and how do I avoid it in writing?”
“Identify and extract biased language in this text”
“How could I rewrite this text so that it is less biased?”
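Before sending an ad to ChatGPT, you could also run a quick first pass yourself. This sketch flags the kinds of terms mentioned above; the flagged list here is illustrative, assembled from the examples in this section rather than any authoritative lexicon.

```python
import re

# Illustrative terms often flagged as exclusionary in job ads.
# These are examples from this section, not a complete or official list.
FLAGGED_TERMS = ["rockstar", "ninja", "guru", "aggressive", "dominate"]

def flag_biased_terms(text: str) -> list[str]:
    """Return the flagged terms that appear in the text (case-insensitive)."""
    lowered = text.lower()
    return [term for term in FLAGGED_TERMS
            if re.search(rf"\b{term}\b", lowered)]

print(flag_biased_terms("We need a coding ninja and marketing guru."))
```

A simple checker like this catches the obvious offenders; ChatGPT's value is in spotting subtler bias that no fixed word list can anticipate.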
Let ChatGPT help you craft job descriptions that emphasize the skills and experience needed to succeed in the role rather than rigid qualifications that may exclude otherwise qualified candidates. Instead of stating a specific degree or certification, focus on the skills and experience needed to perform the job.
Examples of prompts that you can use:
“Help me craft a job ad for a programming job that emphasizes skills and experience rather than strict requirements”
“Rewrite this job ad so that it focuses more on skills and experience rather than strict requirements”
ChatGPT can help you identify and incorporate inclusive language that resonates with a diverse pool of candidates. For example, use language that promotes diversity and inclusion, such as "We are committed to building a team that reflects the diversity of our community."
Examples of prompts that you can use:
“Help me rewrite this with more inclusive language:”
“What should I change in this text to make it more inclusive?”
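If you rewrite job ads regularly, the prompts from this section can be templated so the same instruction is reused consistently. This is a minimal sketch assuming you paste the resulting string into ChatGPT or send it through whatever interface you use; the template names are hypothetical.

```python
# Templates built from the prompts suggested in this section.
# The keys ("gender_neutral", etc.) are hypothetical names for this sketch.
PROMPT_TEMPLATES = {
    "gender_neutral": "Rewrite this text so that it is gender-neutral:\n\n{text}",
    "less_biased": "How could I rewrite this text so that it is less biased?\n\n{text}",
    "inclusive": "Help me rewrite this with more inclusive language:\n\n{text}",
}

def build_prompt(goal: str, job_ad: str) -> str:
    """Combine a rewrite goal with the job-ad text into a single prompt."""
    return PROMPT_TEMPLATES[goal].format(text=job_ad)

print(build_prompt("inclusive", "We want a rockstar salesman to dominate the market."))
```

Keeping the prompts in one place also makes it easy for your team to refine the wording over time as you learn which phrasings get the best rewrites.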
Finally, consider collaborating with a diverse group of stakeholders to review your job ads and provide feedback. ChatGPT can help you facilitate these conversations by generating prompts and questions that promote constructive feedback and dialogue.
Despite being the hottest bot out there, ChatGPT is itself riddled with biases, because it was built by humans and trained on information created by humans. In fact, when we asked it the question, “Would you say you, ChatGPT, are biased in certain topics?” it replied:
"As an AI language model, I do not have personal beliefs or opinions, and I am not capable of being biased in the traditional sense. However, my responses are generated based on patterns and associations found in the language data that I was trained on, which includes text from a diverse range of sources and perspectives.
It is important to note that biases can exist within the language data that I was trained on, which could impact the accuracy or inclusivity of my responses on certain topics. Additionally, biases can also be introduced by the way that users interact with me, such as through the types of questions asked or the language used in those questions.
To mitigate these potential biases, my training data is carefully curated and my developers work to continually improve my accuracy and inclusivity through ongoing testing and evaluation. However, it is important for users to also be aware of potential biases and to critically evaluate the responses that I provide."