You can't stop progress. Just like our ancestors couldn't put the genie back in the bottle once they learned how to use fire, we can't slow down the adoption of Artificial Intelligence (AI). It's like trying to hold back a tidal wave with a single sandbag.
Sure, some folks might be scared of AI, just like they were scared of fire. And you can't blame them, even when they ironically call themselves the "Future of Life." New things can be scary. But the truth is, AI is already all around us. From our phones to our cars to our homes, it's already part of our daily lives. And with NLP becoming more accessible every day, entrepreneurs are seeking ways to deploy it in every aspect of their businesses.
And let's face it, AI has the potential to do a lot of good. It can help us solve problems, make our lives easier, and even save lives. While some billionaires claim that "population collapse" is a greater threat to humanity than even climate change, productivity gains from AI can fill the gap left by a smaller population. So why would we want to hold it back?
It's like trying to put the toothpaste back in the tube once it's already been squeezed out. You just can't do it. So let's embrace AI, let’s put in place an ethical framework to govern our use of AI, and let’s pursue all the wonderful things it can do for us. Because the truth is, it's already here, and it's not going anywhere.
What would be a good set of ethical rules that self-regulating companies could follow when using AI? Could an industry self-regulating organization (SRO) to govern the use of AI be funded through regulated investment crowdfunding (#RIC) and repay investing members a dividend if / when any enforcement actions returned a “profit” to the operating entity?
Let’s tackle the first question here and save the second question for later. Here are some ethical rules that self-regulating companies could follow when using AI:
Transparency: Companies should be transparent about how they use AI and how they make decisions based on AI algorithms. They should disclose what data is being collected, how it is being used, and who has access to it.
Fairness: Companies should ensure that AI systems are designed and deployed in a fair and unbiased manner. They should avoid using data or algorithms that could discriminate against certain groups of people.
Privacy: Companies should protect the privacy of individuals by collecting only the data that is necessary and/or giving individuals the opportunity to opt out of having their data used.
Accountability: Companies should be accountable for the decisions made by their AI systems.
Human oversight / humans in the loop: Companies should ensure that humans make system design decisions and that humans remain in the loop for any changes to those systems.
Safety: Companies should prioritize safety in the development and deployment of AI systems. They should ensure that AI systems are not causing harm to humans or the environment, and that they are not being used for malicious purposes.
Social responsibility: Companies should consider the broader societal impacts of their AI systems and work to minimize any negative consequences. They should also contribute to public discourse and policymaking around AI to ensure that its benefits are shared fairly and equitably.
Overall, this is a draft set of ethical rules – one that will no doubt evolve over time – and they should help guide companies in the responsible use of AI and help ensure that it is deployed in a way that benefits society as a whole.
It's not the AI itself that's the problem. In isolation, AI can't do much. It's connected AI that's the problem. So the bridges that connect AI to other systems need to be a focus of any regulation: the bridge that connects AI to the self-driving car, or to gene-splicing technology, or to neural implants in people's brains.
Some are advocating for a complete freeze on the further development of AI (https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/), but it's not going to happen. Now that VC money is rushing in, growth will only accelerate.