WISeKey's CEO Carlos Moreira Discussed the Need of Establishing an Artificial Intelligence Watchdog at the Digital Banking & Payments Summit in Warsaw
Geneva, Switzerland - February 19, 2019: WISeKey International Holding Ltd (WIHN.SW) ("WISeKey"), a leading Swiss-based cybersecurity and IoT company, today announced that its CEO Carlos Moreira presented at the 7th Annual Digital Banking & Payments Summit in Warsaw, Poland (http://digital-finance.pl/en/). Over 800 visionaries, market leaders, and representatives of more than 150 companies gathered this year in Warsaw to discuss the future of digital banking and payments, share cutting-edge technologies designed to improve the customer experience, and put humans at the center of this transformational process.
During his speech, Mr. Moreira emphasized the need to establish an Artificial Intelligence (AI) watchdog under the auspices of an international organization such as The OISTE Foundation (OISTE.ORG). While AI can meaningfully improve people's lives, it is important to build systems that make decisions free of human bias and manipulation. For example, while hiring by algorithm might be expected to give men and women an equal chance, and data-driven policing to sidestep racial and gender prejudice, new studies show that computers can be biased as well, because AI systems are only as good as the data humans put into them. Bad data can contain implicit racial, gender, or ideological biases.
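The point that a system trained on biased records simply reproduces the bias can be illustrated with a minimal sketch. The data below is entirely synthetic and hypothetical: equally qualified candidates, but historical hiring decisions that favored one group. A naive model that learns from those decisions inherits the prejudice.

```python
# Minimal sketch (synthetic, hypothetical data) of how biased training
# data yields a biased model: a naive classifier built on historical
# hire rates per group reproduces past prejudice in new decisions.

from collections import defaultdict

# Synthetic historical records: (gender, qualified, hired).
# All candidates are equally qualified, but past decisions favored "M".
history = ([("M", True, True)] * 80 + [("M", True, False)] * 20
           + [("F", True, True)] * 40 + [("F", True, False)] * 60)

def train(records):
    """Learn the historical hire rate for each group."""
    hires, totals = defaultdict(int), defaultdict(int)
    for gender, qualified, hired in records:
        totals[gender] += 1
        hires[gender] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, gender):
    """Predict 'hire' when the learned rate for that group exceeds 0.5."""
    return model[gender] > 0.5

model = train(history)
print(model)                # {'M': 0.8, 'F': 0.4}
print(predict(model, "M"))  # True  -- equally qualified candidates,
print(predict(model, "F"))  # False -- different predicted outcomes
```

Nothing in the algorithm is malicious; the disparity enters entirely through the training data, which is exactly why independent review of such systems is being called for.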
Many AI systems will continue to be trained on bad data, making this an ongoing problem. An independent third-party body is therefore required to ensure transparency and fairness and to regulate automated AI decision-making.
According to ethicist Jake Metcalf of Data & Society, more and more social scientists are using AI with the aim of solving society's ills, but they lack clear ethical guidelines to prevent them from accidentally harming people. There are currently no consistent standards or transparent review practices, and the guidelines governing social experiments are outdated and often irrelevant, leaving researchers to make ad-hoc rules as they go.