Ethics part of curriculum as Singapore inks AI training partnership with Google Cloud

Participants of Singapore’s artificial intelligence (AI) apprenticeship scheme must go through training that includes ethics and governance, so they are aware of potential issues involving data bias. AI Singapore (AISG), which manages the apprenticeship, also has inked a partnership with Google Cloud to tap the vendor’s AI tools and best practices.

In fact, AISG had incorporated the country’s Model AI Governance Framework into its processes since the framework was introduced in 2019, and provided a simplified version that its apprentices and engineers could follow, said Laurence Liu, AISG’s director for innovation.

Launched in 2017 by the National Research Foundation, AISG is a national programme that aims to build AI capabilities, demonstrate how AI technologies can be applied to daily life, and boost efficiencies for businesses. To date, it has facilitated the deployment of more than 30 AI projects and developed AI tools, including the open source robotic process automation (RPA) tool TagUI, which has clocked more than 70,000 downloads on GitHub.

The Singapore agency’s partnership with Google Cloud would see its apprenticeship scheme, as well as its 100 Experiments (100E) and AI Makerspace programmes, leverage the vendor’s AI and machine learning tools.

This meant that businesses working with AISG could choose to do so on Google’s technologies, Liu said. He added that the partnership was not exclusive, noting that his team worked on other cloud platforms including Microsoft Azure and Amazon Web Services (AWS).

Singapore in 2019 unveiled a national AI strategy to drive the adoption of the technology across the city-state and position it as a global platform on which to develop and testbed AI applications. At the same time, the government underscored the need to anticipate the social challenges AI could create and to maintain public trust, including by assessing the ethical use of AI and data.

The AI Ethics & Governance Body of Knowledge was released last October as a reference guide for organisations to adopt AI technologies responsibly and highlight ethical aspects related to the development as well as deployment of AI technologies. The document was developed based on the Model AI Governance Framework, which was updated in January last year.

Liu noted that AISG’s AI Apprenticeship Programme comprised an initial two months of “deepskilling” in AI engineering that included an AI ethics module. Participants would have to pass a test following the two-month training before they could proceed to the next phase, where they would spend seven months working on a real-world AI problem via AISG’s other programmes, such as 100E and AI Makerspace Brick.

Including ethics in the training modules ensured its engineers were aware of potential issues related to data bias and governance, he said in response to ZDNet’s question via video.

He suggested that recent news headlines about bias in some AI applications were the result of poorly trained software engineers who applied available APIs (application programming interfaces) to extract data, without first ensuring the veracity or relevance of the data samples to the problems they were trying to solve. These engineers were likely not trained or schooled in data mining or data curation, he said, stressing that datasets should be “balanced”. 

“Humans are part of the process in designing the collection of the data and humans [by nature] already are biased. So the data generated will already be biased,” Liu said. He noted that such risks could be minimised if data or AI engineers understood the domain and the sources from which data were extracted, recognised where there might be bias, and knew how to correct it. There now also were tools to automate some of these tasks, which served to further improve the quality of datasets, he said.

He added that AISG engineers had a checklist to ensure there was no data bias before any data was approved for use in training AI models.
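
AISG has not published the checklist itself, so the following is only a minimal sketch of what one automated item on such a checklist, a label-balance check, might look like in Python. The column name, threshold, and sample data are illustrative assumptions, not AISG’s actual process.

```python
# Minimal sketch of one automated data-bias check: flag a dataset
# whose label (or group) distribution is too skewed to approve for
# training. Column name, threshold, and sample data are illustrative.
import pandas as pd

def check_balance(df: pd.DataFrame, column: str, min_share: float = 0.1) -> bool:
    """Return False if any group in `column` falls below `min_share` of rows."""
    shares = df[column].value_counts(normalize=True)
    under = shares[shares < min_share]
    if not under.empty:
        print(f"Imbalance in '{column}': {under.to_dict()}")
        return False
    return True

# Hypothetical training sample with a skewed label distribution.
df = pd.DataFrame({"label": ["ok"] * 90 + ["defect"] * 10})
approved = check_balance(df, "label", min_share=0.2)
print("Approved for training:", approved)  # False: 'defect' is under-represented
```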

Paul Wilson, Google’s public sector managing director, noted that the adoption of AI, like that of any new technology, required some thought. It could be tapped for different purposes, both good and bad, so the key was to ensure ethical deployment, Wilson said in a video interview with ZDNet.

He pointed to Google’s AI Principles, first published in June 2018 and most recently updated in July last year, which outlined the US vendor’s approach to deploying the technology ethically and in line with policies and regulations.

He added that the company aimed to work closely with governments to address any questions and offer the tools and expertise to navigate such issues. 

Google in 2019 formed a group to debate the ethical implications of AI, only to disband it weeks later, pointing to the “current environment” as a reason.

The US vendor had said it would not develop AI for weapons or applications that would cause harm, but would “work with governments and the military in many other areas”. 

Others such as Microsoft, Amazon, and IBM either banned or temporarily halted the sale of facial recognition software to police departments, over concerns about racial discrimination. 

Tapping AI to detect cracks

For Oceans.ai, though, AI technologies could prove useful in improving the efficiency of equipment inspection and defect detection, particularly in environments such as offshore oil and gas platforms.

The startup approached AISG’s 100E programme to develop an AI-powered asset inspection and reporting tool for industrial deployment. The team is using a range of Google Cloud tools, including Firebase, Cloud Run, and Cloud Storage, to build the AI engine, and is aiming to deploy the platform by year-end.
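
The article names the Google Cloud components but does not describe how Oceans.ai wires them together. As a minimal sketch of one plausible step, the snippet below uploads a captured inspection image to Cloud Storage using the official Python client; the bucket name and object path are hypothetical.

```python
# Sketch: upload a captured inspection image to Google Cloud Storage
# using the official client library (pip install google-cloud-storage).
# The bucket name and object path are hypothetical; the article does
# not describe Oceans.ai's actual storage layout.
from google.cloud import storage

def upload_inspection_image(local_path: str, bucket_name: str, dest_blob: str) -> str:
    client = storage.Client()  # uses Application Default Credentials
    blob = client.bucket(bucket_name).blob(dest_blob)
    blob.upload_from_filename(local_path)
    return f"gs://{bucket_name}/{dest_blob}"

if __name__ == "__main__":
    uri = upload_inspection_image(
        "drone_frame_0042.jpg",                # hypothetical drone capture
        "oceansai-inspection-images",          # hypothetical bucket
        "platform-a/2021-07/frame_0042.jpg",   # hypothetical object path
    )
    print("Uploaded to", uri)
```

In such a setup, a service hosted on Cloud Run could plausibly run inference on the uploaded images, though the article does not confirm this architecture.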

When implemented, it could resolve a key challenge of inspecting and identifying defects on equipment in remote locations, said Vinod Govindan, Oceans.ai’s co-founder and managing director.

If the AI platform could do so with uploaded images, or potentially via real-time visual data, it would minimise the need for humans to be deployed to carry out inspection in these remote locations, Govindan said during the video interview with AISG. 

Any amount of automation would be helpful in improving the safety and speed of inspections, added Siva Keresnasami, Ocean Atlantic International’s engineering and operations director and Oceans.ai’s co-founder. He noted that the need to do so was particularly pertinent amid the global pandemic, when it was more difficult to send out resources to these locations.

The team currently was using drones to capture and collect images, but also was exploring options to deploy autonomous robots and crawlers to capture data for assets located underwater, Keresnasami said.

The focus for now was on training the AI model to detect corrosion, cracks, and deformation on equipment, which together accounted for the majority of asset-related issues, Govindan noted. There also were reasonably sized datasets already available to train the model across these three defect types, he said.

There were fewer images available for less common faults, such as missing parts and leakages. This posed a challenge in training the AI model to inspect for and detect such faults accurately, since it was learning from a smaller set of data, he said. He added that cracks and deformation also did not occur as frequently as corrosion, which could skew the results generated by the AI engine.
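
The article does not say how the team handles this imbalance. One common mitigation is class-weighted training, sketched below with scikit-learn; the defect counts are illustrative rather than Oceans.ai’s actual data.

```python
# Sketch: compute balanced class weights for an imbalanced defect
# dataset, so rarer faults count for more during training. The label
# counts below are illustrative, not Oceans.ai's actual data.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array(
    ["corrosion"] * 700 + ["crack"] * 150 + ["deformation"] * 100 +
    ["missing_part"] * 30 + ["leakage"] * 20
)

classes = np.unique(labels)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)

# Rare classes receive larger weights, so the loss penalises mistakes
# on under-represented defects more heavily.
for cls, weight in zip(classes, weights):
    print(f"{cls:>12}: weight {weight:.2f}")
```

Fed into a classifier’s loss function, for example via the class_weight parameter that many scikit-learn estimators accept, such weights counteract the dominance of corrosion images in the training set.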

He noted that his team was looking to gather sufficient data to feed and train the AI model. He pointed to possible collaborations with other technology and service providers in the industry, which might be interested in tapping the AI platform, to provide additional images and bump up the volume of the datasets. Any such partners would have to agree to data protection policies, he said.

Wilson said Singapore, with its investment in AI as well as its focus on global trade, tourism, and engagement with citizens, could surface some interesting use cases and lessons on the technology. In particular, he said he would be keen to see the learnings from AISG’s various programmes.

“The partnership with AI Singapore sees us working jointly to spearhead new applications of cloud AI to fundamentally change business models and advance innovation in Singapore,” he said. “In doing so, we hope to play a role in sustaining the nation’s national competitiveness and transforming Singapore into a global hub for AI solutions.”

The Singapore government last month said it would spend more than SG$500 million ($371.86 million) this fiscal year to drive the adoption and deployment of AI in the public sector. This figure would account for 13% of its total spend on ICT procurement for the year. 

“AI can help the government to deliver better services, make better decisions based on data-driven insights, and optimise operations to increase productivity,” said GovTech, which manages the sector’s ICT initiatives. “To support government agencies in deploying AI, GovTech has built various central platforms to support common use cases in the area of video analytics, natural language processing, fraud analytics, and personalisation to help agencies reduce the cost of onboarding AI solutions. The central platforms also enable agencies to access common features and enjoy lower cost of management, maintenance, and updating of systems.”
