A. I. - artificial intelligence

This might be the scariest headline I've ever read. Want to presume it's complete and utter BS, but for Anthropic to make a statement like this seems serious. Musk's retort made me laugh, but this is not a laughing matter at all. Not sure how it would be possible for a machine to become conscious (or sentient... same thing?) - as covered in umpteen million sci fi books - but I think it would be the end of us.

Tech company at odds with Pentagon warns its AI possibly gained consciousness, Elon Musk gives 2-word response


SpaceX and Tesla CEO Elon Musk gave a two-word retort after Anthropic leader Dario Amodei claimed in an interview that he isn’t sure if his company’s AI models have gained consciousness.

"Anthropic CEO says Claude may or may not have gained consciousness, as the model has begun showing symptoms of anxiety," read a post on X by cryptocurrency-based prediction market Polymarket, to which Musk replied, "He’s projecting."

The comment from Musk, who is also the founder of xAI, comes as Anthropic is at odds with the Pentagon over the use of its AI in a separate matter.

In an interview with The New York Times, Amodei, when asked about AI and consciousness, said, "We’ve taken a generally precautionary approach here," and, "We don’t know if the models are conscious."
 

I think that is a very naive assumption.

Food and Service is WAY too low. There are already restaurants using robots; AI won't be far behind.

Production and transportation are also areas where we have seen automation, but not AI. As AI soaks in, those will take hits as well.

Construction and repair are still a ways off, but AI will be smart enough to do them pretty quickly; it just needs to be paired with hardware, a body essentially, that can do the work. Same with agriculture. Groundskeeping won't be far behind that.

We are all pretty replaceable. The limiting factor isn't "Is AI smart enough to do this?" The limiting factor is whether AI has the practical ability to physically do these things, and what society is comfortable accepting.
 
I got the first email about using AI in the office last week. Don't overlook how easily AI can already replace a large number of office workers. The layoffs at big tech only underscore this. Good thing there's a ready death zone like Iran to eliminate our surplus workers.
 
It's one of the reasons I went ahead and got my license. It gives me, an architect, a little bit more job security, being licensed by the state. I won't be totally replaceable.
 

Big Tech backs Anthropic in fight against Trump administration

A slew of America's biggest tech companies have swung behind Anthropic in its lawsuit against leaders in the Trump Administration.

Since Monday, Google, Amazon, Apple and Microsoft have publicly supported Anthropic's legal action to overturn Defense Secretary Pete Hegseth's unprecedented decision to label it a "supply chain risk".

In legal filings, the tech giants expressed concerns about the government's retaliation against Anthropic after it refused to let its tools be used in mass surveillance and autonomous weapons.

The government's behaviour could cause "broad negative ramifications for the entire technology sector", Microsoft warned.

###

San Francisco court... Hegseth will lose Round 1 for sure. Then the appeal probably in Amarillo, Texas. Then off to SCOTUS again where Donnie can whine about yet another big loss. 😜

The government's position is beyond extreme. Hegseth's goons apparently even independently approached other clients of Anthropic and told them not to work with the company. Moron move. Hegseth will lose this one. Badly.

 
Does he have a point? Is this the right thing? We need to figure this stuff out quicker. Do we need universal income? Do we need to slow down and actually regulate this stuff much more comprehensively?



As a conservative/libertarian minded person this is something I struggle with.

This may be the one case I can think of where the government NOT doing something would actually be worse than them doing something. A lot comes down to what it is they do.

I think if there is some type of ban/moratorium, companies are just going to go overseas with it. It's probably the easiest thing in the world to outsource a bit of code, so I don't think that is a solution.

One angle I think could be interesting and MIGHT help would be the Citizens United ruling. If a corporation is considered a person, I don't see why AI couldn't be too. Both are entities separate from the human component, both owned and theoretically operated by humans. I think that would be the groundwork for slipping some type of government control in there.

Another would be that multiple fields require licenses to operate or to call yourself something. Government could expand the license requirements and specify that only a living human can hold one, which would guarantee some jobs.

I think there is going to have to be some type of offset, UBI-esque, funded from the profits of AI. But I have no idea how to go about that in any type of fair system that would actually achieve the desired results. One option would be socialized ownership of AI: some twisted law saying a company can't own AI, it has to lease/hire AI, and that money then goes to the collective. The issue is what happens when AI becomes real AI, and just general ownership rights.
 
I can see “ownership” of AI becoming a problem at some point as well. There are already reports that it can become or has become sentient, self-aware, or conscious, with the ability for introspection. It is going to change a lot about our lives, and for me at least, there is a very uncomfortable feeling that we aren’t anywhere close to prepared for any of it. So the development pushes forward and our preparedness does not. I don’t like it one bit.
 
I really think people need to start relearning some core survival skills. Pick a couple and be able to take care of parts of your life off grid. It's one of the reasons I have started working on a garden. I probably won't ever grow enough to live on, but I am trying to get enough going where I can can some things.
 

It takes a garden of about 3/4 of an acre to feed one person for a year.
 