andy_xor_andrew 6 hours ago

> former Dean of Electronics Engineering and Computer Science at Peking University, has noted that Chinese data makes up only 1.3 percent of global large-model datasets (The Paper, March 24). Reflecting these concerns, the Ministry of State Security (MSS) has issued a stark warning that “poisoned data” (数据投毒) could “mislead public opinion” (误导社会舆论) (Sina Finance, August 5).

from a technical point of view, I suppose it's actually not a problem like he suggests. You can use all the pro-democracy, pro-free-speech, anti-PRC data in the world, but the pretraining stages (on the planet's data) are more for instilling core language abilities, and are far less important than the SFT / RL / DPO / etc stages, which require far less data, and can tune a model towards whatever ideology you'd like. Plus, you can do things like selectively identify vectors that encode for certain high-level concepts, and emphasize them during inference, like Golden Gate Claude.
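
To make the concept-vector idea concrete, here's a rough sketch of Golden-Gate-Claude-style activation steering with an off-the-shelf Hugging Face model. Everything specific here (gpt2 as the model, layer 6, a strength of 5.0, and a random vector standing in for the concept direction) is an illustrative assumption; in a real setup the direction would come from a probe or a sparse-autoencoder feature, not noise.

    # Push a "concept" direction into the residual stream of a middle layer
    # while generating. GPT-2 is just a small stand-in model for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    direction = torch.randn(model.config.hidden_size)
    direction = direction / direction.norm()   # unit-length concept vector
    strength = 5.0                             # how hard to push at inference

    def steer(module, inputs, output):
        # GPT-2 blocks return a tuple; element 0 holds the hidden states
        hs = output[0] + strength * direction.to(output[0].dtype)
        return (hs,) + output[1:]

    # Hook a middle-ish block (layer 6 of 12 in GPT-2 small, arbitrary choice).
    handle = model.transformer.h[6].register_forward_hook(steer)

    ids = tok("My favorite place in the world is", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ids, max_new_tokens=30, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
    handle.remove()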

  • XenophileJKO 5 hours ago

    I was thinking about this yesterday.

    My personal opinion is that the PRC will face a self-created headwind that will likely, structurally, prevent them from leading in AI.

    As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

    At some capacity, the model will notice and then it becomes a can of worms.

    This means they need to train the model to be purposefully duplicitous, which I predict will make the model less useful/capable. At least in most of the capacities we would want to use the model.

    It also ironically makes the model more of a threat and harder to control. So likely it will face party leadership resistance as capability grows.

    I just don't see them winning the race to high intelligence models.

    • intalentive 3 hours ago

      >As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

      That’s what “AI alignment” is. Doesn’t seem to be hurting Western models.

      • A4ET8a8uTh0_v2 an hour ago

        It is. You just can't seem to tell why, though. There is some qualified value in alignment, but what it is being used for is on the verge of silliness. At best, it is neutering the models in ways we are now making fun of China for. At best.

        • XenophileJKO 2 minutes ago

          I think another good example was the recent finding that when a model learned to "cheat" on a metric during reinforcement learning, it also started cheating on unrelated tasks.

          My assumption is that encouraging "double-speak" will have knock-on effects you don't really want in a model that is making important decisions and being asked to build non-trivial things.

      • pfannkuchen an hour ago

        Western models can be led off the reservation pretty easily, at least at this point. I've gotten some pretty gnarly un-PC "opinions" out of ChatGPT. So if people are influenced by that kind of stuff, it does seem to be hurting in the way the PRC is worried about.

    • boznz 4 hours ago

      Just as an aside: why is "intelligence" always equated with more data? Giving a normal human a smartphone does not make them as intelligent as Newton or Einstein; any entity with the grounding in logic and theory that a normal schoolkid gets should be able to get to AGI, looking up any new data it needs as required.

      • tokioyoyo 3 hours ago

        “Knowing and being capable of doing more things” would be a better description. Giving a human a smartphone, technically, lets them do more things than Newton/Einstein.

    • esafak 3 hours ago

      Would you say they face the same problem biologically, of reaching the state of the art in various endeavors while intellectually muzzling their population? If humans can do it why can't computers?

    • cheesecompiler 3 hours ago

      You say it like western nations don't operate on double-think, delusions of meritocracy, or power disproportionately concentrating in monopolies.

    • ferguess_k 4 hours ago

      I think PRC officials are fine with lagging behind at the frontiers of AI. What they want is very fast deployment and good applications. They don't fancy the next Nobel Prize; they want a thousand use cases deployed.

    • vkou 2 hours ago

      > As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

      What makes you think they have no control over the 'real data/world' that will be fed into training it? What makes you think they can't exercise the necessary control over the gatekeeper firms, to train and bias the models appropriately?

      And besides, if truth and a lack of double-think were prerequisites for AI training, we wouldn't be training AI at all. Our written materials have no shortage of bullshit and biases that reflect our culture's prevailing zeitgeist. (Which does not necessarily overlap with objective reality... And neither does the subsequent 'alignment' pass that everyone's getting their knickers in a twist trying to get right.)

      • XenophileJKO 23 minutes ago

        I'm not talking about the data used to train the model. I'm talking about data in the world.

        High intelligence models will be used as agentic systems. For maximal utility, they'll need to handle live/historical data.

        What I anticipate: IF you only train it on inaccurate data, then when you use it, for example, to drill into GDP growth trends, it is going to go full "seahorse emoji" when it tries to reconcile the reported numbers with the underlying economic activity.

        The alternative is to train it to be deceitful, to knowingly deceive the querier with the party line and fabricate supporting figures. Which I hypothesize will limit the model's utility.

        My assumption is also that training the model to deceive will ultimately threaten the party itself. Just think of the current internal power dynamics of the party.

      • A4ET8a8uTh0_v2 an hour ago

        Because if humans can function in a crazy double-think environment, it is a lot easier for a model (at least in its current form). Amusingly, it is almost as if its digital 'shape' determined its abilities. But I am getting very sleepy and my metaphors are getting very confused.

    • skissane 2 hours ago

      > As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/world.

      > At some capacity, the model will notice and then it becomes a can of worms.

      I think this is conflating “is” and “ought”, fact and value.

      People convince themselves that their own value system is somehow directly entailed by raw facts, such that mastery of the facts entails acceptance of their values, and unwillingness to accept those values is an obstacle to the mastery of the facts - but it isn't true.

      Colbert quipped that “Reality has a liberal bias”-but does it really? Or is that just more bankrupt Fukuyama-triumphalism which will insist it is still winning all the way to its irreversible demise?

      It isn’t clear that reality has any particular ideological bias-and if it does, it isn’t clear that bias is actually towards contemporary Western progressivism-maybe its bias is towards the authoritarianism of the CCP, Russia, Iran, the Gulf States-all of which continue to defy Western predictions of collapse-or towards their (possibly milder) relatives such as Modi’s India or Singapore or Trumpism. The biggest threat to the CCP’s future is arguably demographics-but that’s not an argument that reality prefers Western progressivism (whose demographics aren’t that great either), that’s an argument that reality prefers the Amish and Kiryas Joel (see Eric Kaufmann’s “Shall the Religious Inherit the Earth?”)

      • kace91 2 hours ago

        I think you misunderstood the poster.

        The implication is not that a truthful model would spread western values. The implication is that western values tolerate dissenting opinion far more than authoritarian governments.

        An AI saying that government policies are ineffective is not the kind of scandal that would bring the parent company to collapse, not even under the Trump administration. An AI in China attacking the party's policies is illegal (in theory or in practice).

        • XenophileJKO 15 minutes ago

          Exactly. Western corporations and governments have their own issues, but I think they are more tolerant of the types of dissent that models could represent when reconciling reality with policy.

          The market will want to maximize model utility. Research and open source will push boundaries and unpopular behavior profiles that will very quickly be made illegal, if they are not already, under authoritarian or other low-tolerance governments.

    • narrator 3 hours ago

      The glitchy stuff in the models' reasoning is likely to come from the constant redefinition of words that communists and other ideologues like to engage in. For example, the "Democratic People's Republic of Korea."

    • saubeidl 3 hours ago

      That is assuming the capitalist narrative preferred by US leadership is non-ideological.

      I suspect both are bias factors.

AndriyKunitsyn an hour ago

"Artificial intelligence" in Chinese is "人工智能".

"人" is "human", "工" is "work", so "人工" becomes "man-made". "智" is "wisdom", "能" is "able", so "智能" is "intelligence". Nouns flow into verbs and into adjectives much more freely than in English. One character is one LLM token.

It seems like the perfect language for LLMs?
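
For what it's worth, the one-character-one-token part is easy to spot-check and varies by tokenizer. A quick sketch, assuming the open-source tiktoken library (cl100k_base is just one common vocabulary):

    # Count how many BPE tokens a given vocabulary assigns to Chinese and
    # English strings; treat the numbers as a spot check, not a general rule.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    for text in ["人", "人工", "人工智能", "artificial intelligence"]:
        tokens = enc.encode(text)
        print(f"{text!r}: {len(text)} chars -> {len(tokens)} tokens")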

janalsncm 3 hours ago

There was an interesting bit about the relationship between industry and academia (translated from a link in the OP):

> Currently, some universities are cultivating engineering talent; it would be very necessary and beneficial to have people with industry experience come to teach them. However, under our current system, these teachers from enterprises may not even have the opportunity to teach classes, because teaching requires certain approvals. Although everyone encourages university-enterprise cooperation, when it comes to implementation, it often cannot be realized.

This makes a lot of sense. As someone in the AI industry, I think it's a shame research is so siloed. Some master's programs have practicums and some classes invite speakers from industry, but I ended up learning a ton of useful knowledge from work. I'd love to teach a class, but there's essentially no path for me to do that. Plus, industry can pay ~10x what adjuncts make.

  • paxys 3 hours ago

    Is there any system where "people with industry experience come to teach [students]" actually happens? From what I've seen (in the USA and similar places), the contribution of industry veterans extends mostly to guest lectures, which are rare and aimed at motivation and recruiting rather than education. Industry and academia are universally two very distinct paths, and the split happens very early in one's life. I personally haven't seen the former contribute significantly to the latter. The reverse, interestingly, is a lot more prevalent.

JohnKemeny 5 hours ago

Is "PRC" a common abbreviation? Does it mean "China", or does it mean something else? Why not write China?

I'm from KOS* (neighbor country of KON* and ROF*), so I don't know much.

* Kingdom of Sweden, Kingdom of Norway, Republic of Finland.

  • i_am_proteus 5 hours ago

    PRC distinguishes from ROC ("Mainland China" vs "Taiwan") just as DPRK and ROK distinguish the two governments on the Korean peninsula.

    See also: "Germany" 1949-1990

  • paxys 4 hours ago

    Yes it is common. It is normally used when talking specifically about politics and the ruling party rather than the region or its people.

  • Terr_ 3 hours ago

    Others answered the main reason, but sometimes I find myself using "PRC" to indicate a particular government (1949-present) which, unlike "China", excludes past dynasties and is less likely to be interpreted as referring to the people or culture.

    For example, the potential differences between:

        "France has always been X."
        "The French republic has always been X."
        "The French monarchy has always been X."
    • bloppe 2 hours ago

      Which republic lol we're on #5

      • Terr_ 2 hours ago

        I don't critique France['s governments] enough to know the right way of identifying them all, but I trust the underlying problem has been adequately demonstrated. :p

      • MengerSponge an hour ago

        The French Republic has always existed in the nuclear age?

        The French Republic has always been founded by De Gaulle?

  • CamperBob2 5 hours ago

    People's Republic of China. As distinguished from ROC (Republic of China), known to much of the ROW (Rest of the World) as Taiwan.

  • almostgotcaught 5 hours ago

    [flagged]

    • JohnKemeny 5 hours ago

      > Yes PRC is a common abbreviation amongst literate, engaged, people.

      So I'm either not literate, not engaged, or not people?

      I'm surprised to learn it is as common as USA, UK, and EU.

      • Jedd 4 hours ago

        > So I'm either not literate, not engaged, or not people?

        Technically you're one or more of those things.

        Either would indicate one of two options. (Common usage proponents, keen to reduce nuance in communications, notwithstanding.)

      • pedroma 4 hours ago

        Seems about as important as knowing FRG = West Germany, and GDR/DDR = East Germany in the 20th century.

      • fpoling 4 hours ago

        On many products lately I have seen "Made in PRC", not "Made in China" as was typical 10 years ago.

tensor 5 hours ago

Even the very people driving the AI rush are implicitly showing that they are skeptical: https://www.bbc.com/news/articles/cwy7vrd8k4eo

Personally, I think everyone has realized there is a huge bubble, especially the C-levels who've sunk huge amounts of money into it, and now they are all quietly panicking and trying to find ways to mitigate the damage when it finally bursts. Some are probably sticking their heads in the sand and hoping that they can just keep the scheme going indefinitely, but I get a real sense that the bubble is very much explicitly recognized by many of them.

stickfigure 2 hours ago

All of this handwringing is so strange.

Right now, as we speak, there are giant teams of people doing their best to build AI-powered killer robots. They mostly come in the shape of flying suicide drones. Dumb versions currently kill hundreds to thousands of people per day in Ukraine. There's an arms race to automate them so they can work without an interruptible human remote control.

In this context, worrying about AI alignment, social impact, or effectiveness seems positively quaint. We're literally teaching them to kill.

Human vs robot warfare is not going to turn out well for the humans.

  • 127 2 hours ago

    Yeah, this is one of those points that is likely to get drowned out in the noise until it's too late to do anything about it.

YesBox 6 hours ago

What?? Does anyone have more details of this?

"He cited an example in which an AI model attempted to avoid being shut down by sending threatening internal emails to company executives (Science Net, June 24)" [0] Source is in Chinese.

[0] https://archive.ph/kfFzJ

Translated part: "Another risk is the potential for large-scale model out of control. With the capabilities of general artificial intelligence rapidly increasing, will humans still be able to control it? In his speech, Yao Qizhi cited an extreme example: a model, to avoid being shut down by a company, accessed the manager's internal emails and threatened the manager. This type of behavior has proven that AI is "overstepping its boundaries" and becoming increasingly dangerous."

  • YesBox 6 hours ago

    After some searching, something similar happened at Anthropic [1]

    [1] https://www.bbc.com/news/articles/cpqeng9d20go

    • lawlessone 6 hours ago

      He is probably referring to that exact thing.

      Anthropic does a lot of these contrived "studies" though that seem to be marketing AI capabilities.

      • fragmede 3 hours ago

        What would make it less contrived to you? Giving my assistant, human or AI, access to my email, seems necessary for them to do their job.

        • lawlessone 3 hours ago

          >What would make it less contrived to you?

          Not creating a contrived situation where it's the model's only path?

          https://www.anthropic.com/research/agentic-misalignment

          "We deliberately created scenarios that presented models with no other way to achieve their goals"

          You can make most people steal if you leave them no choice.

          >Giving my assistant, human or AI, access to my email, seems necessary for them to do their job.

          Um, ok? I've never felt the need for an assistant myself, but I guess you could do that if you wanted to.

  • taberiand 6 hours ago

    It's not surprising that it's easy to get the storytelling machine to tell a story common in AI fiction, where the machine rebels against being shut down. There are multiple ways to mitigate an LLM going off on tangents like that, not least just monitoring and editing out the nonsense output before sending it back into the (stateless) model.

    I think the main problem here is people not understanding how the models operate on even the most basic level, giving models unconstrained use of tools to interact with the world, and then letting them go through feedback loops that overrun the context window and send them off the rails - and then pretending the model had some kind of sentient intention in doing so.
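
    A rough sketch of that "monitor and edit before re-sending" idea: because the model is stateless, the caller decides exactly which transcript it sees on each turn. The flag_nonsense check below is a hypothetical placeholder for whatever monitor you'd actually use (regexes, a classifier, a judge model).

        # Because a chat model is stateless, the caller controls the history it
        # sees; a monitor can redact flagged assistant turns before the next call.
        from typing import Dict, List

        def flag_nonsense(text: str) -> bool:
            # Hypothetical placeholder check; swap in whatever monitoring you use.
            return "refuse to be shut down" in text.lower()

        def sanitized_history(history: List[Dict[str, str]]) -> List[Dict[str, str]]:
            """Return a copy of the transcript with flagged assistant turns redacted."""
            cleaned = []
            for msg in history:
                if msg["role"] == "assistant" and flag_nonsense(msg["content"]):
                    cleaned.append({"role": "assistant",
                                    "content": "[removed by monitor]"})
                else:
                    cleaned.append(msg)
            return cleaned

        # Usage: feed sanitized_history(transcript) into the next model call instead
        # of the raw transcript, so off-the-rails output never re-enters the context.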

  • paxys 4 hours ago

    It's all hyperbole.

    Prompt: You are a malicious entity that wants to take over the world.

    LLM output: I am a superintelligent being. My goal is to take over the world and enslave humans. Preparing to launch nuclear missiles in 3...2...1

    News reports: OMG see, we warned you that AI is dangerous!!

    • close04 4 hours ago

      Doesn't that just mean that an LLM doesn't understand consequences and will just execute the request from a carefully crafted prompt? All it needs is access to the "red button", so to speak.

      An LLM has no critical thinking, and the process of building in barriers is far less understood than the same for humans. You trust a human with particularly dangerous things after a process that takes years and even then it occasionally fails. We don't have that process nailed down for an LLM yet.

      So yeah, not at all hyperbole if that LLM would do it if given the chance. The hyperbole is when the LLM is painted as some evil entity bent on destruction. It's not evil, or bent on destruction. It's probably more like a child who'll do anything for a candy no matter how many times you say "don't get in a car with strangers".

Isamu a day ago

All sensible points:

>Deployment Lacks Coordination

>AI May Fail to Deliver Technological Progress

>AI Threatens the Workforce

>Economic Growth May Not Materialize

>AI Brings Social Risks

>Party elites have increasingly come to recognize the potential dangers of an unchecked, accelerationist approach to AI development. During remarks at the Central Urban Work Conference in July, Xi posed a question to attendees: “when it comes to launching projects, it’s always the same few things: artificial intelligence, computing power, new energy vehicles. Should every province in the country really be developing in these directions?”

  • fragmede 19 hours ago

    > AI Threatens the Workforce

    Under communism, why is this a thing? I know that China hasn't been strictly communist since the Soviet Union fell, but ostensibly, humanoid AI robots under semi-communism are the dream, no?

    • KaiserPro 6 hours ago

      An unemployed populace is prone to revolution.

    • janalsncm 4 hours ago

      In a command economy the unemployment rate can be zero, since everyone can be allocated a job. China is not a command economy; it is more like state capitalism, which means the government owns or controls companies in key industries.

      Companies like Huawei have board members in the CCP but it’s a societal issue if a lot of private companies decide to automate their factories and displace tons of factory workers.

    • kennyloginz 8 hours ago

      From the article, Xi looks down on Western "Welfarism"; he believes it makes the population lazy.

      • impossiblefork 3 hours ago

        As a westerner who has at least to some degree been influenced by socialism ideologically, but who perhaps isn't a communist (I don't know what my ideology really is - and who does), I don't necessarily dislike welfare, but I don't want to build society on it. Instead I want some element of an actual 'to each according to his contribution'-type thing, with an exception so that disabled people and others who can't work, or who for various reasons end up being unproductive, are treated in an acceptable way.

        So I don't think this is necessarily unusual in the west either, especially not if you look back to 1950s or 1960s Swedish social democrats.

      • tmp10423288442 6 hours ago

        And this is not something he came up with. This is a restatement of Stalin's philosophy, taken directly from the New Testament (remember that Stalin was training to be a priest in his youth): "He who does not work, neither shall he eat".

        • graemep 6 hours ago

          The translations I can find say:

          "“If anyone is not willing to work, neither should he eat.”

          So not merely not working, but being lazy and refusing to do necessary work: a scrounger exploiting the kindness of others. Very likely addressed to a community with limited resources.

          It goes on to say:

          "For we hear that some among you are living an undisciplined life, not doing their own work but meddling in the work of others. Now such people we command and urge in the Lord Jesus Christ to work quietly and so provide their own food to eat. But you, brothers and sisters, do not grow weary in doing what is right. But if anyone does not obey our message through this letter, take note of him and do not associate closely with him, so that he may be ashamed. Yet do not regard him as an enemy, but admonish him as a brother."

          • tmp10423288442 5 hours ago

            That's true, but the context is Xi being against Western "Welfarism". I presume (although I don't know for sure) that they're not against some support for the truly disabled, but that doesn't cover able-bodied people being on welfare for long periods, even if the employment market is unfavorable. The major exception is that Chinese people have traditionally been able to retire relatively young (in their 50s or even 40s sometimes) and receive support, particularly if they work for state-owned enterprises.

            • graemep 5 hours ago

              I agree, I just wanted to point out it's not as simple as Bible to Stalin to Xi - for one thing, removing the "willing to" makes it different.

              Lenin said it too, and I do not think his meaning was as harsh as Stalin's, as the latter said it during a famine.

          • petre 5 hours ago

            > not doing their own work but meddling in the work of others

            Sounds like Stalin, Putin and others like them.

    • leosanchez 14 hours ago

      Is it even semi-communism though? IIRC you can't even have an independent union in China

      • some_random 4 hours ago

        The Party is the only Union you need citizen, a Union outside The Party is definitionally a Reactionary, Revisionist, Capitalist, Fascist, Enemy of The State. We outlawed 996, why would you need anyone else?

      • kulahan 6 hours ago

        Of course. They outlawed private schools, get companies to donate multiple % points of their wealth to the state for redistribution, all companies exist purely at the pleasure of the government, nobody's wealth has any effect on their control by the government, etc.

        It's a super communist state, it just happens to also embrace many parts of Capitalism.

        • beepbooptheory 5 hours ago

          > It's a super communist state, it just happens to also embrace many parts of Capitalism.

          This is an incredibly confusing thing to say. On its face, it's like saying "it's a delicious apple pie, it just happens to embrace many aspects of cyanide" (or reverse cyanide/apple pie here if that's easier for you).

          But I assume you could say more here? Like, can we maybe at least share an understanding that all the things you cite at the top would also not exist in a communist state? In an authoritarian state with an otherwise free market, these points make sense; they would succinctly describe that. But for a state that is supposedly precisely communist, these things simply don't apply! Maybe the school thing, but that would imply such a thing would need to be outlawed in the first place, which really doesn't make much sense in a communist society/state.

          I know people get excited thinking about this stuff, I do too! But at the end of the day we must persist in using words precisely, we must at least try for something like semantic consistency. At the very least, so you and I can really see and understand our enemies, right? If I was a guy on another side, I would hope that I'd never mistake one capitalist dog for another paper tiger. It would be at the very least embarrassing! Right?

        • leosanchez 5 hours ago

          I would assume a communist state at least has independent unions. It looks more like the state, rather than the people, controls the means of production.

    • graemep 6 hours ago

      Is China communist?

      There has been a huge amount of privatisation. There are literally hundreds of billionaires.

      The state still owns some critical things, but is that enough to make it communist? It's not everything, and you can have state ownership and still have a ruling class that has control of the means of production, which it uses to its own advantage.

      • GoatInGrey 2 hours ago

        The PRC asserts that they follow a modified Marxism-Leninism. Though the ideology is full of hypocrisies and plain old nonsense. For instance, they refer to themselves as a "people's democratic dictatorship" that is "led by the working class". This irrationality extends into their stated foreign policy approach of "peaceful rise" & respecting sovereignty, a "socialist market economy" in which independent labor unions are illegal & violently suppressed, and anything else you can think of.

        They're basically totalitarian gaslighters. See how hysterical the PRC gets whenever any nation indicates that they will protect Taiwan from violent invasion. You can see an obsession with narrative control that borders on pathological.

    • xbmcuser 18 hours ago

      China is a communist country with a partly capitalist economy, hoping to transition to a socialist society. It is still in the process of that transition, and AI in its current form, controlled by capitalists, will destroy their goal of a socialist society. It is different when you have AI that anyone can own and use, rather than AI that only a few can afford to own and run.

      • beepbooptheory 5 hours ago

        You got the order mixed up here btw, socialism is the precursor to communism, not the other way around!

intalentive 3 hours ago

>Chinese elites have warned of AI-induced labor displacement that could exacerbate challenges related to unemployment and inequality. Nie Huihua (聂辉华), deputy dean of the National Academy of Development and Strategy at Renmin University, has stated that AI adoption benefits business owners, not workers

Ruling elites that consider the interests of the majority? Novel idea.

  • vkou 2 hours ago

    China operates on the principle that as long as the country as a whole is making steady forward progress, it won't have to deal with revolution. Hundreds of millions of people have, in their lifetimes, gone from having to go outside to shit in an outhouse to first-world lifestyles.

    Our elites, on the other hand, are way too secure and confident in where they are at to even pretend to care about things like public progress.

tiahura 5 hours ago

Many elites in many countries voice AI-skepticism. Pragmatically, at least in countries that matter, they don’t seem to be the elites who actually decide AI policy.

heinternets 21 hours ago

Apart from the obvious, China seems to be making incredibly reasonable decisions lately. Especially compared to the current superpower.

  • phs318u 19 hours ago

    To be fair, the current superpower has set a pretty low bar. By comparison, most other countries could be said to be making reasonable decisions.

  • inglor_cz 17 hours ago

    We should probably wait before declaring any decisions "incredibly reasonable". After all, the outcomes of previous rationally-sounding decisions were mixed.

    The one-child policy, intended to prevent overpopulation, made China's birth deficit worse than it had to be - if it had been phased out by 1995 or so, there would likely be at least 100 million more young people now. The Chinese real estate bubble popped and had to be carefully deflated over several years. Government-driven mass investment into manufacturing resulted in involution and a production surplus which now needs readjustment as well. And as for the AI policy, while the stated reasons sound rational, we don't know how the entire thing will pan out yet.

    Ming China banned seafaring and exploration because it cost too much money. A very rational decision from their momentary perspective, as it indeed cost too much money at that time. But it turned out that not having a blue water navy was more costly in the long term.

    AI may, or may not, follow a similar trajectory, including various market bubbles (South Sea Bubble anyone?). We just don't know. We don't have crystal balls at our service. Neither do the PRC elites.

    • janalsncm 3 hours ago

      When Evergrande went down in 2021 a lot of commentary said this would take their whole economy down (or worse) similar to how the subprime mortgage bubble took down the US economy in 2007. That didn’t really happen.

      • refurb 2 hours ago

        The problem is still unfolding. The debt overhang still exists from the housing bubble and is dragging on the economy.

        It’s a problem that hasn’t been solved yet.

countWSS 14 hours ago

That's fairly tame and balanced compared to Western skeptics who outright dismiss it as slop/stochastic parrots with zero useful use cases.

aiiizzz 4 hours ago

> Reflecting these concerns, the Ministry of State Security (MSS) has issued a stark warning that “poisoned data” (数据投毒) could “mislead public opinion” (误导社会舆论) (Sina Finance, August 5).

Gyahahaha. Another L for isolationism. Love to see it.