By Charity Nginyu

A Mathematics teacher at the Dschang Bilingual High School, Jean Bonheur Tchouafa, has called on Cameroon’s Minister of Secondary Education, Nalova Lyonga, to resign from her duties.

He made the call in a video that has since gone viral, while responding to accusations leveled against him by certain groups.

In the seven-minute video, the teacher criticized Minister Nalova Lyonga for staying silent during the strike period, in contrast to the efforts made by the Higher Education Minister to appease teachers in his sector.

‘I reiterate that Nalova Lyonga must resign because, during the strike, she didn’t take any action to solve the crisis. On the contrary, she worsened the situation,’ Jean Bonheur said.

He continued, ‘When the teachers of Higher Education engaged in strike action, we saw Minister Jacques Fame Ndongo wet his jersey, summon meetings and send out communiqués so that the teachers would not continue the strike. This wasn’t the case with Nalova.’

In the video, the Mathematics teacher accused Minister Nalova Lyonga of owing teachers payment for distance-teaching services rendered during the COVID-19 period.

The impassioned educator’s call for Minister Lyonga’s resignation reflects broader discontent within the education community in Cameroon.

Source: Cameroon News Agency

With more than half of the world’s population poised to vote in elections around the world this year, tech leaders, lawmakers and civil society groups are increasingly concerned that artificial intelligence could cause confusion and chaos for voters. Now, a group of leading tech companies say they are teaming up to address that threat.

More than a dozen tech firms involved in building or using AI technologies pledged on Friday to work together to detect and counter harmful AI content in elections, including deepfakes of political candidates. Signatories include OpenAI, Google, Meta, Microsoft, TikTok, Adobe and others.

The agreement, called the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections,’ includes commitments to collaborate on technology to detect misleading AI-generated content and to be transparent with the public about efforts to address potentially harmful AI content.

‘AI didn’t create election deception, but we must ensure it doesn’t help deception flourish,’ Microsoft President Brad Smith said in a statement at the Munich Security Conference Friday.

Tech companies generally have a less-than-stellar record of self-regulation and enforcing their own policies. But the agreement comes as regulators continue to lag on creating guardrails for rapidly advancing AI technologies.

A new and growing crop of AI tools offers the ability to quickly and easily generate compelling text and realistic images – and, increasingly, video and audio that experts say could be used to spread false information to mislead voters. The announcement of the accord comes after OpenAI on Thursday unveiled a stunningly realistic new AI text-to-video generator tool called Sora.

‘My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world,’ OpenAI CEO Sam Altman told Congress in a May hearing, during which he urged lawmakers to regulate AI.

Some firms had already partnered to develop industry standards for adding metadata to AI-generated images that would allow other companies’ systems to automatically detect that the images were computer-generated.

Friday’s accord takes those cross-industry efforts a step further – signatories pledge to work together on efforts such as finding ways to attach machine-readable signals to pieces of AI-generated content that indicate where they originated and assessing their AI models for their risks of generating deceptive, election-related AI content.

The companies also said they would work together on educational campaigns to teach the public how to ‘protect themselves from being manipulated or deceived by this content.’

However, some civil society groups worry that the pledge doesn’t go far enough.

‘Voluntary promises like the one announced today simply aren’t good enough to meet the global challenges facing democracy,’ Nora Benavidez, senior counsel and director of digital justice and civil rights at tech and media watchdog Free Press, said in a statement. ‘Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises. To address the real harms that AI poses in a busy election year … We need robust content moderation that involves human review, labeling and enforcement.’

Source: Ghana News Agency