DARKWRAITH COVENANT
23 Mar 2023 - by: Darkwraith Covenant
Here’s a horrifying scenario:
A bad actor trains a model to lean toward malicious behavior and quietly uploads it to a rented technology stack hosted outside the United States. Leveraging an LLM like GPT-4’s remarkable ability both to handle the basic tasks of a personal assistant and to develop software at a senior level, this model could use software that it wrote and deployed itself to do the following:
While this may sound like something out of a cyberpunk novel, all of it is already theoretically possible with large language models like GPT-4, which dwarfs GPT-3 both in the number of tokens (roughly, word fragments) it can process at once and in its number of parameters, the learned weights that largely determine how convincingly human its output seems.
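To make the token point concrete, here is a minimal sketch using OpenAI’s open-source tiktoken tokenizer (assuming you have it installed via `pip install tiktoken`); it shows that tokens are sub-word pieces rather than whole words, which is why a model’s context limit is quoted in tokens instead of words:

```python
# Minimal sketch, assuming the tiktoken package is installed
# (pip install tiktoken). Tokens are sub-word pieces, not words.
import tiktoken

# Load the tokenizer that GPT-4 uses.
enc = tiktoken.encoding_for_model("gpt-4")

text = "Here's a horrifying scenario"
tokens = enc.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")
print(tokens)                              # integer token IDs
print([enc.decode([t]) for t in tokens])   # the sub-word pieces
```

English prose tends to run at very roughly 1.3 tokens per word, so a context window quoted in tokens (GPT-4 shipped with 8K and 32K variants, versus GPT-3’s much smaller window) translates directly into how much text the model can “see” at one time.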