This post was originally the second half of Neural Networks, Lensa, and the Past, Present and Future of Computing, published on this blog in December 2022. The first half (which I recommend reading first) now focuses entirely on providing a technical overview of how AI works, whereas this part focuses on some of the ethical implications of AI.
Part 5: ethical implications. Or, is it wrong for me to use Lensa for my profile picture?

One thing I have hinted at but haven't said explicitly is that a neural net needs a lot of data—the more, the better—to mold its neurons into the perfect set of weights and biases. Where does it get this data? That depends, and is an article in itself, but as a simple thought experiment, try going on Google and running an image search for a topic of your choice. Then start right-clicking the images, choosing the "Save image as..." option, and putting them in a folder. Now, instead of doing that manually, imagine a computer program that automates exactly this process, saving thousands or more images a second.
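To make that concrete, here is a minimal sketch of what such a program might look like in Python. The URLs and folder name are placeholders I made up; a real crawler harvests millions of links from web pages rather than hard-coding a list, but the core loop really is this simple:

```python
import os
import requests  # third-party HTTP library: pip install requests

# Placeholder URLs; a real crawler would harvest these from
# web pages at enormous scale rather than hard-coding them.
image_urls = [
    "https://example.com/cat.jpg",
    "https://example.com/dog.png",
]

os.makedirs("scraped_images", exist_ok=True)

for i, url in enumerate(image_urls):
    response = requests.get(url, timeout=10)
    if response.status_code == 200:
        # The automated equivalent of right-click, "Save image as..."
        with open(os.path.join("scraped_images", f"image_{i}.jpg"), "wb") as f:
            f.write(response.content)
```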
Gathering images, however, isn't really the point of controversy. If I downloaded a bunch of images from the internet, that's generally accepted as OK—maybe artists aren't totally comfortable with the idea, but this is behavior society has known about for a while and chosen not to prohibit. If, however, I then printed out those digital images, framed them, and sold them on a street corner as "my" art, that would be crossing a line, both ethically and legally. But if I simply looked at the images, saw patterns in what works and what doesn't, then got out a brush and canvas and painted something of my own—not copying any specific image, but drawing inspiration from all of them—that's OK again. This last scenario is the closest familiar analogy I can think of for what generative AI does.

So does that make it OK? Well...maybe? The process is abstractly defensible, but the analogy isn't perfect, and ethics change with scale. If you take a process that people are a little squeamish about but have grudgingly accepted, then magnify it by a million, it's reasonable to say you are in different territory. Ultimately, it is not enough to rely on analogy or precedent here; it's up to us, right now, to decide whether AI-generated art is ethical and whether it should be made illegal, based on its impact and our values.

So let's talk about impact. The most obvious benefit of AI-driven apps like Lensa is the creation of satisfying imagery that otherwise would not exist; the most obvious damage is lost work for artists. To put it simply: is AI making the already difficult lives of artists even harder? This is complicated, because the answer varies by context. I'll illustrate with a few examples.

Some art is likely immune to advances in AI—specifically, art that derives its value from the prestige of how it was created. As an analogy, an original painting is worth far more than a print, even if the print is exquisitely made, even indistinguishable from the original, because of the buyer's knowledge of how the image was created. Likewise, I would expect that a fair bit of art will continue to be valued for the human effort put into it, which by definition cannot be replaced by AI.

On the other hand, a great deal of art is valued as a commodity. Promotional movie posters, video game graphics, website backgrounds, and so on all require the work of artists, but no one really cares how they were made. If an AI generates images that are good enough to meet any of these needs, every bit of that is paid work lost to artists. Then again, this isn't a total loss, because the money not given to artists is saved by the would-be customers. That said, my sympathies are definitely with the artists on this one—and I say this as someone who runs an indie game studio.

Lastly, there is art that only exists because of AI and isn't replacing anything. Those custom profile images popping up on Facebook are a prime example. They are, for the most part, temporarily replacing selfies, which, on a moral level, are about as "yeah, whatever" as it gets.

Overall, this superficially follows the same path as every new automation since the Industrial Revolution: some jobs are lost, hurting some people while saving others money; some jobs stay safe; some new stuff is created; mostly, things just change. There are, however, a few things that are different this time. First, while automation of the past typically eliminated the mind-numbing, repetitive, and otherwise awful jobs no one really wanted, neural-net-based AI is disrupting some of the most creative forms of work, the kind people aspire towards and would do for free if it were sustainable—and AI is arguably making it less so. Second, modern AI accelerates the pace of automation to a point where it may be very difficult for increasing numbers of people to keep up with the disruptions. More on this last point here: https://www.youtube.com/watch?v=7Pq-S557XQU.

The last thing I feel needs to be said about the economic challenges of automation is that the solution is as obvious as it is inevitable. You all know what it is, so say it with me: "Universal Basic Income". We can have it now, we can have it later, or we can all starve. Some political choices are difficult; this is not one of them, so I won't waste any more time talking about it.

Another point that comes up on the ethics of AI image generation is the idea that, by uploading pictures of yourself to apps like Lensa, you are giving up your personal data and feeding the algorithm. Maybe Lensa does this, maybe they don't. If it concerns you, read their Terms of Service; yes or no, something like this is going to be in there. As a rule, companies will happily screw people over, but they are very, very squeamish about getting sued. Not including basic privacy information like this—or worse, promising safeguards and lying—would put them in a world of legal hurt for very little gain.

In any case, this is one point I personally wouldn't be too worried about, for several reasons. First, companies like Lensa don't have much to gain by adding the pictures you upload to their database. Their image crawlers have already trawled THE ENTIRE INTERNET (this is only a slight exaggeration); a couple million extra images from people using the app are a drop in the bucket. Second, which is really the first point made more explicit, if you are updating an existing profile picture, they probably already had it before you used the app. Even if Facebook is diligently and respectfully safeguarding your privacy (ha!) and blocking access to your profile from external bots, your profile pictures are public; anyone can see them, including bots automatically right-click-saving them into a database. The purpose of uploading your image is to give the app a starting point to modify; any additional internal uses, if there are any, are inconsequential, again because they don't need your image and probably already have it. Third, Lensa isn't even the organization collecting the images; LAION does that. Nor did Lensa create the AI system that generates its images; that's Stable Diffusion. Lensa obtained access to Stable Diffusion's system, which was trained on LAION's dataset. So what did Lensa actually make? The short and oversimplified answer: they made an interface. To put that in layman's terms: you know how Lensa has, like, buttons and menus and junk, and you can find it on the app store? They built that.
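For a rough sense of how thin that layer can be, here is a minimal sketch of generating an image with the open-source diffusers library, which wraps Stable Diffusion. The model identifier and prompt are illustrative; I have no knowledge of Lensa's actual integration, which is certainly more elaborate:

```python
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers transformers

# Download the publicly released Stable Diffusion weights.
# (The model ID is illustrative; several versions exist on Hugging Face.)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generation is far faster on a GPU

# One function call does the actual "art"; everything an app adds
# (accounts, payments, photo upload, style presets) sits around this.
image = pipe("stylized portrait, digital painting").images[0]
image.save("profile_picture.png")
```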
Now, it's possible that they are making a few extra pennies by selling uploaded images back to LAION—again, check the Terms of Service if this concerns you—but for all the above reasons, this just isn't the angle I would be concerned about. When it comes to AI generally, and neural-net-based art generators specifically, there is a lot to be concerned about: privacy violations in the short term, worker displacement in the medium term, and runaway processes that increasingly diverge from human values in the long term. For far too long, these concerns have been flying under the radar of popular consciousness. If Lensa's images serve as an inflection point that gets people engaged in this topic, leading to societal changes that shift the way we relate to technology, that's a wonderful and long-needed development. There are, however, enough legitimate dangers that we don't need to imagine threats that don't exist.

Part 6: ethical implications, continued. So...is using AI art ethically wrong or not?

Answer! If you feel unsure about whether it is right to use Lensa for your profile picture, blog post, story you are writing as a hobby, or other use case that isn't directly taking an artist's job away but nonetheless benefits from a technology with morally grey impacts on the world, and the above discussion doesn't give you any practical clarity, then ultimately this is something only you can decide, because it comes down to your personal values.

That said, I'll offer one creative solution that isn't perfect, but at least points in the direction of a win-win. Quantify how much the generated art is worth to you, based on how much you are gaining from it and your personal finances (basically, how much you would be willing to pay at an auction), and then give that money to a struggling artist—or a collective, or a charity, or a Kickstarter campaign, or a ticket to a show that has no risk of selling out and that you won't actually attend—whatever feels relevant. How much is up to you; maybe it's $50, maybe it's $5. If your answer is that you can't afford to pay anything...I call bullshit, because you just spent $8 on Lensa.

Here's the framing behind this answer. The reason using AI art feels icky is not because it's hurting anyone—again, if the art would otherwise not exist because its economic value is less than the minimum any artist would be willing to accept as payment, no job has been lost—it's because the action follows the pattern of an unhealthy social dynamic: paying based on the minimum you are able to bargain for, rather than on an honest appraisal of what is fair, in a context where you have a disproportionate amount of bargaining power. This is the same dynamic many of us despise in capitalism at its worst: CEOs accepting incomprehensibly large paychecks while their employees are so poor as to need financial assistance from the government, the US healthcare system price-gouging patients because no one wants to bargain-shop when their life is on the line, and so on. Getting images for free that are similar to what a human artist would create as their life's work feels like you have been given an opportunity to step into the shoes of Jeff Bezos and chosen to make the same anti-social decisions that you may have criticized him for.
The scale is smaller to the point of triviality, but the underlying social dynamic is similar enough that it feels like hypocrisy.

Admittedly, my answer of compensating via charity is not perfect. For one thing, it requires a bit of mental reframing that might not come naturally. To some, it may sound suspiciously like penance, the religious practice of intentionally hurting oneself in order to absolve sinful behavior. I prefer to think of it more like putting aside one's bargaining power in favor of fairness. For example, in a recent job offer I made to an artist for my upcoming game, Oscillarium, they asked for an hourly rate that I thought was selling themselves short, and so I countered with something higher. I don't say this to make myself sound generous—there are others on the team that I bargained down to the minimum they would accept—but to illustrate the idea of making deals based on your view of what is fair, rather than always bargain-shopping for the best deal you can get.

A deeper flaw in my answer is that it is a personal solution to a societal problem. That is, if it were to be relied on as Society's Path Forward in dealing with AI, I have zero faith that it would hold up at scale. A majority of people would simply take the cheap stuff and not pay anything (or grossly underpay), even if they were using it in a profitable context. I don't say this merely from a grim, Hobbesian view of human nature, but from having personally created a significant amount of open-source code, as well as free educational videos, and observed the economics of these endeavors directly. That said, I believe that paying to offset your use of AI-generated art addresses the ethical issues on at least a personal level. It is also a means of "voting with your wallet" on a template for a broader, societal answer. I don't know what that bigger answer looks like...oh wait, that's not true, I know exactly what it looks like! One more time, all together, loud enough so those in the back can hear: "Universal Basic Income"!

Part 7: what's next? Some speculations based on patterns I have seen. Actually, they were speculations when I came up with them about three years ago, when I was actively studying this stuff, but in the last year they have started coming true. To be clear, I am not advocating any of these ideas; they are merely what I expect to happen.

1) GPUs instead of CPUs. The whole point of CPUs is to process instructions sequentially, allowing for predictable results that can in turn be the basis for more complex instructions. Neural nets, however, are composed of a large number of identical, rather simple parts acting simultaneously, which don't require all that much precision or predictability. Simulating this process in a sequential manner introduces a huge and unnecessary bottleneck. GPUs, in contrast, are perfectly suited for the task and could bring efficiencies on the order of shrinking a supercomputer into a chip, and the power consumption of a small city down to a powerful light bulb. Update: this is happening but, as far as I know, is not yet standard.
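You can see the difference directly by timing the same matrix multiplication (the core operation inside a neural-net layer) on both kinds of hardware. A minimal sketch with PyTorch, assuming a CUDA-capable GPU is available:

```python
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

# CPU: a handful of powerful cores chewing through instruction streams.
start = time.time()
c_cpu = a @ b
print(f"CPU: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    # GPU: thousands of simple arithmetic units computing output
    # elements simultaneously.
    a_gpu, b_gpu = a.cuda(), b.cuda()
    _ = a_gpu @ b_gpu          # warm-up; the first call pays one-time setup costs
    torch.cuda.synchronize()   # GPU calls are asynchronous; wait before timing
    start = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the multiply to actually finish
    print(f"GPU: {time.time() - start:.3f} s")
```

On typical hardware, the GPU version of this multiply tends to finish one to two orders of magnitude faster.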
2) Feedback loops between vertical and horizontal systems. Neural nets are good at pattern recognition; traditional programs are good at sequential logic. Humans use both, simultaneously, and they play off each other. Pop culture often pits logic and intuition against each other, but really they are deeply complementary processes. Intuition casts a wide net, perceiving things that logic misses; logic refines intuition's findings, casting out bad data and generally bumping us out of local minima. I predict that the next generation of AI will likewise integrate traditional programming with neural networks, with neural nets driving the creation of code and code refining the direction of the networks. Update: the AI that effectively plays "Diplomacy" (Meta's Cicero) claims to do something like this, but I'm not sure how deep that goes.

3) Emotions! Headline-earning neural networks have a ridiculous number of neurons. That's fine, human brains have far more neurons, but in typical neural nets, every neuron in one layer is connected to every neuron in the next, so the number of connections grows as the product of the layer sizes. Now suppose an AI is broken into multiple modules (such as in point 2) that need to communicate with each other. Again, as the modules get bigger, the number of connections between them explodes. And again, that may be OK for a giant supercomputer, but what if you want to shrink an AI system down to something that can run on a single laptop, robot, or some other context with limited hardware? At that point, you need some really intense data compression.

Biology had to deal with this same problem in creating our brains. Think about the flood of just visual information coming in through your eyes every second; now add to it all the other senses; now add to that everything else happening in our brains—thoughts, memories, motor control. How do we handle all that in less than 2 liters of grey matter without the whole thing frying like an egg? My theory? Massive, massive data compression—sensory information gets compressed into complex emotional states, then stored as memories, and then decompressed into thoughts and ideas that can be acted on. This comes at a cost: all that compression inevitably loses information, and from that loss we suffer from biases and blind spots. Overall, that's a fantastic trade for our purposes, as those biases are tuned to cause trouble mainly in situations that are unlikely to happen, but they become rather problematic when other people learn about our cognitive shortcuts and intentionally exploit them with things like gambling.

Will a similar dynamic occur with the AI of the future? Very possibly. In one of the Unity game engine's machine-learning tests, they rewarded agents for discovering novelty as a way to encourage them to fully explore their environment before getting locked into a local minimum...then discovered those agents would get mesmerized by walls of static. Rewarding novelty isn't exactly the same as an emotion like curiosity, but it is the kind of high-level adaptive shortcut I am talking about. Update: this one is totally speculative; I don't know of anyone building emotions into AI...but if biology thinks emotions are so useful, it seems to me like only a matter of time before engineers see the light and come to the same conclusion.
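The closest existing machinery I know of to the kind of compression I'm describing is an autoencoder: a network trained to squeeze its input through a narrow bottleneck and reconstruct it on the other side, keeping the features that matter and discarding the rest. A minimal sketch in PyTorch, with layer sizes that are arbitrary choices of mine, purely for illustration:

```python
import torch
import torch.nn as nn

# The encoder squeezes a 784-dimensional input (say, a 28x28 image)
# down to just 8 numbers; the decoder reconstructs from that code.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 8))
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 784))

x = torch.rand(64, 784)         # a batch of 64 fake "images"
code = encoder(x)               # 784 numbers compressed to 8: lossy!
reconstruction = decoder(code)  # best-effort reconstruction

# Training minimizes reconstruction error; whatever the bottleneck
# cannot carry is permanently lost (the "biases and blind spots").
loss = nn.functional.mse_loss(reconstruction, x)
loss.backward()
```

An autoencoder obviously isn't an emotion, but the shape of the trade-off, extreme compression bought at the price of systematic information loss, is the one I mean.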
A common thread in all of these hypothetical changes, as well as the innovations of the past, is that the processes involved in AI are becoming increasingly similar to human thought. But this is not universal. In terms of capabilities, it makes intuitive sense that what works for biology should work for computer science, such that the two gradually converge. There is no reason, however, to assume that future AI agents will have anything even remotely approximating human values, such as the desire for social connection, comfort, mental stimulation, or (more negatively) social dominance, unless such desires are deliberately added. Science fiction of the past has cultivated an image of future AI as a hyper-logical version of humanity, far surpassing us in calculation but ever-lacking in creativity and passion. If the trajectory of real AI continues, however, it won't look anything like that at all. The best pop culture point of reference I can think of is aliens. Not familiar aliens like the warlike Klingons or the peaceful Na'vi, whose narrative purpose is to reflect aspects of humanity back to us, but the truly alien aliens of more imaginative sci-fi sources, like the first appearance of the Borg in Star Trek (before they got watered down), or the heptapods from Arrival: beings whose deepest motives and ways of thinking are so inscrutable to us that we have no idea what to expect.

A final note: I am not an AI evangelist. I love technology in general, especially 3D printing and VR, but I find neural networks terrifying to the point where I believe we should be doing everything we can to slow down capabilities progress on them, at least until we can get a handle on what is really going on inside all those neurons. Seriously, an entire branch of technology whose core premise is the absence of human control? How can this possibly go well for us?! Maybe I'm just a luddite programmer, but when the code you are writing just happens to "work" (sort of), you don't know why, and it occasionally does weird and undesirable things like turning into a Nazi, applying racist loan policies, or inserting surprise child porn into an interactive text adventure, you don't wait for a catastrophe to strike. It's broken now, it needs to be fixed, and if that means stopping development while there's a complete overhaul of the entire system, starting from an entirely different perspective, so be it.