AI - Screenwriters return to work for first time in nearly five months while actors await new negotiations



Artificial intelligence, commonly abbreviated as AI, refers to the development of intelligent machines that can perform tasks that typically require human intelligence, such as perception, reasoning, and learning. AI can be categorized into two main types: narrow or weak AI, and artificial general intelligence (AGI) or strong AI.

Narrow or weak AI is the most common type of AI available today, and it is designed to perform a specific task, such as playing chess or driving a car. It relies on programmed algorithms to accomplish its task, and it does not have the ability to learn new things or adapt to new situations on its own. Narrow AI is used in a wide range of applications, from facial recognition software to virtual assistants like Siri and Alexa.

On the other hand, AGI or strong AI is a hypothetical form of AI that can perform any intellectual task that a human can do. It would have the ability to think abstractly, learn on its own, and make complex decisions based on its knowledge. AGI is still in the realm of science fiction, but many researchers and futurists believe that it may be possible to develop in the future.

There are several subfields within AI, such as machine learning, natural language processing (NLP), and computer vision. Machine learning is the process of teaching a computer to learn from data, without being explicitly programmed. It relies on algorithms that can recognize patterns in data and make predictions based on that data. NLP is the ability of a computer to understand and interpret human language, and it is used in applications such as chatbots and voice assistants. Computer vision involves teaching a computer to see and interpret images, and it has applications in fields such as self-driving cars and facial recognition software.
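
To illustrate the machine-learning idea described above, the following minimal sketch shows a model inferring patterns from labelled examples rather than following hand-written rules. It assumes Python with the scikit-learn library installed, and the dataset and model choice are arbitrary illustrations rather than a recommendation.

# A minimal sketch of supervised machine learning: the model learns patterns
# from labelled examples instead of being explicitly programmed with rules.
# (Assumes Python with scikit-learn installed; dataset and model are illustrative.)
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small labelled dataset: flower measurements (features) and species (labels).
X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data to test whether the learned patterns generalise.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# No classification rules are written by hand; they are inferred from the examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate on data the model has never seen, the usual test of "learning".
print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")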

AI has many potential benefits, including increased efficiency and productivity, improved decision-making, and better healthcare outcomes. For example, AI can help doctors diagnose diseases more accurately and develop personalized treatment plans for patients. It can also help businesses automate tedious tasks and improve customer service. However, there are also potential risks associated with AI, such as job loss, privacy concerns, and the potential for AI to be used for malicious purposes.

One of the biggest challenges in AI development is creating machines that can truly understand and interpret human language. While NLP has made significant progress in recent years, there is still a long way to go before machines can truly understand the nuances of human communication. Another challenge is developing AI that is transparent and explainable, so that humans can understand how decisions are being made. This is particularly important in applications such as healthcare and criminal justice, where decisions made by AI can have significant consequences.

In order to develop AI, researchers need access to large datasets to train algorithms. However, there are concerns about the ethical implications of using personal data for AI development. There are also concerns about bias in AI, as algorithms may unintentionally reflect the biases of their developers or the data they were trained on. To address these concerns, there is a growing movement towards responsible AI development, which involves transparent and inclusive processes that take into account the potential impact of AI on society.

In conclusion, AI has the potential to revolutionize many aspects of society, from healthcare to finance to transportation. However, it also poses significant ethical and social challenges that need to be addressed. As AI development continues to accelerate, it is important that researchers, policymakers, and the public work together to ensure that AI is developed in a manner that is ethical, transparent, and beneficial to society as a whole.


The perils of economic forecasting in uncertain times

Financial Times

23-05-14 12:19


The credibility of economic forecasts has come under scrutiny in recent years as central banks have struggled to accurately predict the impact of multiple shocks, including the pandemic, geopolitical shifts and wars. High-profile errors have led investors and politicians to question the accuracy of forecasts, although it has also been pointed out that economists have faced highly uncertain conditions over the past two years. Experts argue that sourcing knowledge from beyond the economics profession and improving communication could help ensure forecasts are treated as reference points rather than foresight, building trust in the face of uncertainty.

https://www.ft.com/content/e5cff3d4-af37-47e6-9d95-e8b7c55a088d
OpenAI’s Sam Altman nears $100mn funding for Worldcoin crypto project

Financial Times

23-05-14 11:19


Sam Altman's blockchain company Worldcoin is looking to raise a further $100m to help create a secure global cryptocurrency. The identities of the investors taking part have not been revealed, but one of the group is understood to be an existing stakeholder, with others joining the round. The company was set up in 2019 by Altman and Alex Blania and has so far kept a low profile, with Altman preferring to focus on his role as CEO of OpenAI. The identity verification system, which uses iris-scanning technology, has raised concerns over privacy.

https://www.ft.com/content/f1de2aee-ee13-45e1-bd61-0a269cd650d3
Investors eye AI start-ups to harness tech, offer solutions for mass market

South China Morning Post

23-05-14 10:13


AI start-ups in Hong Kong and the Greater Bay Area are increasingly being sought after by investors as AI technology is adopted across various industries. The launch of ChatGPT late last year has heightened awareness of AI's applications and its impact on everyday life. Market observers note the huge potential of AI-related businesses and say they are investing in and collaborating with many start-ups in the Greater Bay Area and beyond. Despite some investors playing it safe amid a volatile economy, rising interest rates and pandemic-related recessionary fears, investment in promising start-ups is still taking place.

https://www.scmp.com/business/article/3220517/ai-powered-start-ups-gain-interest-among-investors-who-want-harness-tech-offer-solutions-mass-market
The Turkish deepfake porn video could change the future of elections

Telegraph

23-05-14 10:00


The Turkish presidential election has been marked by accusations of foreign meddling and “fake news”. Last week Muharrem Ince, who was polling at only around two per cent, pulled out of the presidential race. Ince, who had previously refused to step aside for Kemal Kilicdaroglu, the candidate seeking to unseat increasingly autocratic Turkish leader Recep Tayyip Erdogan, claimed a sex tape involving him had been doctored and was being used to smear his campaign. Voting took place on 14 May, with pre-election polls putting Kilicdaroglu on 49.3% to Erdogan’s 43.7%. Analysts claim such dirty tricks from Russia might be part of a greater effort to push Turkey closer to the Russian sphere of influence. The world is watching the election closely, as Turkey plays a critical role in controlling the flow of refugees into Europe and remains a large Muslim democracy in a region hardly overflowing with them.

https://www.telegraph.co.uk/news/2023/05/14/turkey-deepfake-elections-erdogan-muharrem-ince/
AI data the next flashpoint between media and technology

The Sydney Morning Herald

23-05-14 09:30


News Corp and Nine are lobbying the Australian government to extend the news media bargaining code to cover artificial intelligence (AI) companies. Commercial search engines from firms such as Google and Microsoft currently draw on content from news media sites to provide users with answers free of charge. While Nine's CEO, Mike Sneesby, believes AI-enhanced search poses a risk to publishers, much as Facebook and Google's platforms have, he also acknowledges the opportunities AI presents. By using AI to collate footage from its archives, Nine could create documentaries faster.

https://www.smh.com.au/business/companies/ai-data-the-next-flashpoint-between-media-and-technology-20230508-p5d6k5.html
OpenAI readies new open-source AI model - The Information

Reuters

23-05-15 22:36


OpenAI is reportedly set to release a new open-source language model, according to an anonymous source cited by The Information. Its ChatGPT chatbot has surged in popularity in Silicon Valley, with investors seeing generative AI as the next growth area for tech firms. Microsoft announced a multibillion-dollar investment in OpenAI earlier this year, while Alphabet has its own rival chatbot, Bard, and Meta Platforms is rushing to catch up by also releasing AI products capable of creating human-like written content. The new open-source model is not expected to compete with OpenAI's flagship GPT models. The firm did not comment on the reports.

https://www.reuters.com/technology/openai-readies-new-open-source-ai-model-information-2023-05-15/
There’s no such thing as a digital native

Financial Times

23-05-16 04:24


The term “digital native” has lost its relevance, according to Stephen Bush in the Financial Times. Bush, who once counted as a digital native himself, argues that as the meaning of “digital” has shifted, so have the skills and knowledge the label implies, making the definition less useful. Children starting school for the first time now will have only a partial memory of the world before the rise of artificial intelligence (AI), which will change the way machines interact with people.

As technology moves forward, software evolves alongside hardware, making it both easier and harder to use. While children from younger generations may be familiar with smartphones and tablets, they lack the understanding needed to grapple with deeply entrenched policy issues like regulatory structures required for tackling cybersecurity, according to Bush. Digital natives may be more familiar with the world of ecommerce, but in terms of wider political implications, the title is now less useful.

As new technologies continue to emerge, the idea of “digital native” will continue to be eroded and the divisions between those who understand these technologies and those who do not will become less clean-cut, Bush argued. The idea that a new generation of digital native children will be able to regulate the tech industries and resolve tricky policy issues is wishful thinking, he explained.


https://www.ft.com/content/9851a259-f438-4cd2-8cb5-1997298c1b86

The race to bring generative AI to mobile devices

Financial Times

23-05-16 04:22


Advancements in generative artificial intelligence (AI) could transform mobile communications and computing faster than expected, according to the Financial Times. Tech firms attempting to embed generative AI into their software and services have faced high computing costs, which will only grow as internet search users come to expect AI-generated content in standard search results. Running generative AI on mobile handsets could lower those costs and make services such as chatbots far cheaper for companies to operate. Smaller, open-source models have also made the technology more accessible to businesses wanting to use generative AI in their own services.

https://www.ft.com/content/6579591d-4469-4b28-81a2-64d1196b44ab
Amazon to add ChatGPT-like search to its online store

South China Morning Post

23-05-16 04:00


Amazon is seeking to compete with Microsoft and Google by adding an AI search tool similar to ChatGPT to its online store. The retailer has posted job listings seeking a software development engineer and other staff for its product search function, with one post stating the company was seeking applicants to reimagine search with an interactive, conversational experience. Early AI search features from Microsoft and Google have produced glitches in response to some queries, but by combining machine learning with search tools they offer a potentially more useful way of searching for products.

https://www.scmp.com/tech/big-tech/article/3220679/amazon-add-chatgpt-search-joining-google-and-microsoft-generative-ai-race
To Compete With China on Tech, America Needs to Fix Its Immigration System

Foreign Affairs

23-05-16 04:00


The US has a chip talent shortage, and this is attributable to the complex US immigration system. According to Google’s former CEO, Washington needs to remove needless complexities to make its immigration system more transparent and create new pathways for the best minds to come to the US. While the US’s dysfunctional system is putting off talented experts, other countries are attracting them. China is particularly proactive, with President Xi Jinping declaring that “the competition of today’s world is a competition of human talent and education”. The nation has begun spending money to woo back native-born STEM graduates, and Chinese engineers and scientists who moved abroad to work are being offered powerful incentives to return home. Similarly, the UK's High Potential Individual visa program is specifically aimed at graduates of some of the world’s best universities. In the US, however, immigration reform has been blocked for years, despite bipartisan support for common-sense changes.

To confront the great geopolitical challenges facing the US in the coming years, the US government should make a concerted effort to identify and recruit top researchers from around the world. Attracting exceptional scientists will allow the US to maintain its technological edge. The US government has a successful history of using such a strategy, and during WWII succeeded in attracting exceptional talent, including such luminaries as Albert Einstein and Enrico Fermi. Today, Washington needs to do more to attract leading scientists and entrepreneurs, including those from non-aligned or even hostile states.


https://www.foreignaffairs.com/united-states/compete-china-tech-america-needs-fix-its-immigration-system

Are killer robots the future of war?

Al Jazeera

23-05-16 03:08


Killer robots, driven by developments in artificial intelligence (AI), are transforming the future of conflict and prompting intense debate over the ethical, legal, and technological implications of their use. While many nations have invested heavily in developing lethal autonomous weapons systems (LAWS), including China, Iran, Israel, South Korea, the UK, and the US, global consensus over their use and regulation remains elusive. A report from the United Nations suggests that the Turkish-made Kargu-2 drones marked a new era in warfare as they attacked combatants in Libya in 2020 without an officer directing the attack or a soldier pulling the trigger. A blanket ban on autonomous weapons systems does not currently look likely, but there is a growing call for regulation, with some experts suggesting a global taboo of the kind in place for chemical weapons.

Advocates suggest that autonomous weapons systems could eliminate human error and bias, reduce accidental human casualties, and carry out some battlefield tasks without endangering human soldiers. However, critics argue that machines that make life and death decisions must not be allowed in the field without human oversight. There are ethical concerns over emotionless machines making such decisions, and it may be challenging to determine who is accountable if a robot commits a war crime. The international community has yet to agree on a definition of autonomous weapons systems and may struggle to achieve global consensus on how to approach their regulation.

As autonomous weapons become increasingly sophisticated and are deployed on the battlefield, the potential implications of their use on international law and ethics and their impact on human rights remain unclear. Countries such as Russia have already expressed their objections to legally binding instruments, and more research is needed to determine what types of weapon or scenario are particularly problematic. While researchers suggest that the beneficial technology used in autonomous weapons systems could improve car safety systems, trying to put control measures in place once a device is operational is difficult. A two-tier set of regulations could be more realistic, with some systems prohibited and others allowed only if they meet a strict set of requirements.


https://www.aljazeera.com/features/2023/5/16/are-killer-robots-the-future-of-war

AI boom, China’s waning recovery to boost SOE, tech stocks rally: brokerages

South China Morning Post

23-05-16 07:29


Brokerages including CSC Financial, Western Securities and Northeast Securities expect a rally in Chinese state-owned enterprises (SOEs) and technology stocks to receive a further boost in the second half of 2023. The call reflects the low valuations and attractive dividend payouts of listed SOEs, as well as rising investment in artificial intelligence across the technology sector. SOEs in the onshore market have shown resilience this year despite a weaker-than-expected economic recovery from Covid-19. The sub-gauge of technology stocks in China has climbed 11% so far this year, while a measure of central SOEs has gained 9%.

https://www.scmp.com/business/china-business/article/3220710/chinas-soe-tech-stocks-rally-strengthen-traders-seek-shelter-faltering-economic-recovery-say
Is China about to raise fees for international university students?

South China Morning Post

23-05-16 07:00


Senior Chinese education experts are calling for an increase in university tuition fees for international students to compete with UK and US universities and attract better students. Beijing-based researcher Liu Jin, whose research programme is funded by the National Natural Science Foundation of China, and his team recommend that the standard fee of 20,000 yuan ($2,800) be raised to about 100,000 yuan ($14,300). Chinese tertiary education institutions have charged a flat rate across the board since 1998. The proposal argues that the increase would allow Chinese universities to provide better educational services. Data published by the College Board in the US shows that the average tuition fee at American public four-year universities was $26,820 per year in 2020-21, while fees in the UK vary between institutions and study programmes, ranging from £10,000 ($12,400) to £38,000 ($47,300) per year. China became the third-largest destination for international students after the US and the UK, hosting almost 500,000 students in 2019.

https://www.scmp.com/news/china/science/article/3220678/china-about-raise-fees-international-university-students
AI can help solve writer’s block, claims Pet Shop Boys’ Neil Tennant

Telegraph

23-05-16 07:00


Pet Shop Boys member Neil Tennant has said that artificial intelligence (AI) could be deployed as a tool by musicians suffering from writer’s block. Although concerns have been raised that machine learning could ultimately make human artists redundant, Tennant said AI offered some benefits, such as allowing songs to be completed more easily. He cited a demonstration by the band’s manager’s 15-year-old daughter, who used a bot to produce a song in the band’s style. The pair also criticised the “ghettoisation” of children’s TV programmes on dedicated channels.

https://www.telegraph.co.uk/news/2023/05/16/pet-shop-boys-neil-tennant-ai-solve-writers-block/
This AI hoax should terrify woke journalists

Telegraph

23-05-16 07:00


The Irish Times published an opinion piece that had been created and submitted by a pseudonymous prankster using AI. The column claimed that wearing fake tan is racist, and it attracted substantial traffic before being removed by the editors, who apologised for not spotting the prank. The columnist argues that the editors' lapse is understandable, since the accusations in the column were consistent with those routinely levelled against white people by left-wing columnists at publications such as The Guardian.

https://www.telegraph.co.uk/columnists/2023/05/16/irish-times-ai-hoax-fake-tan-racist-woke/
ChatGPT creator to warn congress of ‘urgent’ AI risks - follow live

The Independent

23-05-16 13:04


OpenAI CEO Sam Altman will testify before the US Senate Judiciary Subcommittee on Privacy, Technology and the Law regarding the risks of artificial intelligence (AI) and the rules needed to avoid them. The hearing follows calls by Senator Richard Blumenthal, who chairs the subcommittee, for "rules and safeguards" to address the potential benefits and "pitfalls" associated with AI. Other witnesses include IBM chief privacy officer Christina Montgomery and New York University professor emeritus Gary Marcus.

https://www.independent.co.uk/tech/sam-altman-ai-congress-live-chatgpt-openai-b2339688.html
The CEO behind ChatGPT is testifying. Here’s what to expect.

Washington Post

23-05-16 12:32


OpenAI chief executive Sam Altman has warned the US Senate that AI chatbots such as his own company's risk undermining data privacy, intellectual property, competition and US democracy. Altman made his debut appearance before the Senate this week, with senators exploring how AI chatbots can inadvertently produce misinformation as well as how they can be used deliberately for disinformation, for example via deepfakes. His company's ChatGPT chatbot has exploded in popularity in recent months. Ahead of the hearing, senators largely played down talk of a grilling, saying they were seeking input for legislation to rein in AI chatbots like ChatGPT rather than staging a contentious confrontation. AI bias, copyright and antitrust considerations were also raised, with some Republican senators fearing potentially intrusive surveillance capabilities.

https://www.washingtonpost.com/politics/2023/05/16/ceo-behind-chatgpt-is-testifying-heres-what-expect/
Beijing to provide state-funded computing resources to AI firms

South China Morning Post

23-05-16 12:00


The Beijing government has published a draft policy that would provide state-funded computing power to support the city’s artificial intelligence industry. The policy aims to “grab and seize” opportunities in developing large language models (LLMs) and artificial general intelligence, and highlights three further areas: computing power, training data and regulation. The proposal calls for public cloud providers to collaborate and pool their computing power for use by Beijing-based tertiary institutions, research facilities, and small and medium-sized enterprises, while the quality of Chinese-language training data will be improved through “cleansing”. The government is accepting public feedback on the draft policy until Friday.

https://www.scmp.com/tech/policy/article/3220736/chinas-capital-beijing-provide-state-sponsored-computing-resources-ai-firms-amid-chatgpt-frenzy
OpenAI chief set to call for greater regulation of artificial intelligence

Financial Times

23-05-16 11:19


Sam Altman, chief executive of OpenAI, will tell a US Senate subcommittee that legislation on artificial intelligence (AI) is important but that regulation should give companies flexibility to take advantage of developments in the technology. Altman, whose company created the AI chatbot ChatGPT, will testify before Congress for the first time. The testimony comes as governments and regulators around the world scrutinise AI as its use becomes more commonplace. Last week, EU lawmakers advanced a stringent and comprehensive set of rules on the use of AI, including restrictions on chatbots such as ChatGPT. Altman will recommend a set of safety requirements for companies, along with licensing or registration conditions for AI models.

https://www.ft.com/content/aa3598f7-1470-45e4-a296-bd26953c176f