To be clear here, the answer to Eldoran's question of 'why would AI kill us' is a more technical sub-point. It's related to, but distinct from, whether we should be worried about the immediate future of AI development, and *that's* what I meant to address with the FLI open letter. This is miles and miles away from what's generally meant when people talk about an AI uprising.
Sure, when the broader public and media discuss AI issues, people stick pictures of the Terminator on everything, and that's ridiculous. But that doesn't mean there aren't real, fairly immediate problems that form the basis for serious discussion among professionals and subject-matter experts.
After all, if you're living in certain parts of the world, an AI *really might kill you* in the depressingly near future, depending on what regulations are imposed internationally on the development and use of autonomous weapons...