Making Better episode 6: Jim Fruchterman #a11y

Jim Fruchterman is a leading social entrepreneur, a MacArthur Fellow, a recipient of the Skoll Award for Social Entrepreneurship, and a Distinguished Alumnus of Caltech. Jim believes that technology has the power to improve—even transform—the lives of people around the world. As Founder and CEO of Benetech, he focused on bringing Silicon Valley’s technology innovations to all of humanity, not just the richest five percent. He is a former rocket engineer who also founded two successful for-profit technology companies. Under Jim’s leadership, Benetech created and scaled multiple software-for-social-good enterprises spanning education, human rights, and environmental conservation. Jim has recently founded a new nonprofit, Tech Matters, to provide strategic technology services that maximize impact, not profit. Jim is active on Twitter as @JimFruchterman.

As with all of our episodes, this one also is accompanied by a transcript so that everyone can enjoy it. Click here to read a transcript of Episode 6.

Episode 6: Jim Fruchterman

Making Better—Jim Fruchterman

(Music) Welcome to the Making Better podcast, interviewing some of the world’s finest thinkers about a more optimistic future. Now, here are your hosts, Chris Hofstader and Dr. Francis DiDonato.

Chris: Well, Francis, we’re up to episode 6 of Making Better!

Francis: Yes, and this is a really great episode.

Chris: This episode is Jim Fruchterman. He’s a MacArthur “Genius” Fellow, he’s one of the people who invented modern machine-learning-based optical character recognition (OCR), he’s a social entrepreneur and a leader in the social entrepreneurship field, and while I respect an awful lot of people in the world, Jim is one of the very few whom I truly admire. This episode has a few audio glitches in it; we use a program called Zoom to record, and we had a few internet hiccups, but we hope you enjoy the episode.

Francis: (hiccups) That was a hiccup.


Chris: Jim Fruchterman, welcome to Making Better!

Jim: Thanks a lot Chris, glad to be here.

Chris: So you have a long and varied career doing all kinds of things, but always with a social conscience aspect to it. So if you could just give us a bit about your background, that’d be a great way to get started.

Jim: Well sure. Well, basically I’m a nerd. You know, I started doing computer programming in the early 70s, I went to Caltech, which is kind of nerd mecca, and so I was always interested in technology and science and figuring things out—and it was never quite clear what that career was going to be, but I thought I’d either be an astronaut or a professor. So that was kind of the track that I was on in college and when I started grad school. The connection I had to solving social problems was in college, I was in a class, a modern optics class, and we were learning how to make optical pattern recognition things. And because it was the 70s, and pretty much all the jobs were in the military-industrial complex, our professor was using the example of how you could essentially get a smart missile with a camera in its nose, and have a computer that had a representation of the target—could be a tank, or a bridge—and the idea is that you’d fire your missile, it would look around in the world until it spotted its target, lock on, zoom in and blow it up. So I had to do a project for this class, and I was going back after the lecture going, “I wonder if there’s a more socially beneficial application of this.” And then I got my one good idea in college, which was, hey, maybe you could make a reading machine for the blind. Maybe instead of recognizing tanks, you could recognize letters and words and speak them aloud. So, the next day I went back to my professor with a lot of enthusiasm, and he explained that someone actually had used this kind of technology to do pattern recognition on words; matter of fact, I think it was the National Security Agency using it to sort through Soviet faxes that they had intercepted. And they were having too many faxes, so if they could spot the words, like “nuclear weapon” in Russian, they would actually route that to a human…a human analyst to actually review. And I said, oh, great, so it’s already been built—how much does it cost? He said, uh, I think it’s millions of dollars per installation…which took a little of the air out of my, you know, reading-machine-for-the-blind tires. But it laid the groundwork for some of the future things that happened. So after finishing my master’s at Caltech, I went to Stanford to start a PhD, and—Stanford is in the middle of Silicon Valley, and this was a pretty exciting time in Silicon Valley’s history, and so I and a couple of other engineering grad students started an entrepreneurship talk series in our dorm. And our first speaker had started a PC company named for our dorm, and the second speaker was the president of a private rocket company. And, I’d always wanted to be an astronaut, I’d even gotten an interview with, with the people at NASA in Texas, and so I said, ah great! So I took a leave of absence from my PhD program, joined the rocket project as their chief electrical engineer, built all their electronic systems, and the rocket actually blew up on the launch pad. So that was a bit of a disappointment…and I went back to Silicon Valley with my boss from the rocket project, and we tried to start our own rocket company, we tried to raise $200 million. No one gave us $200 million, and then my boss—now partner—said, hey, I know this guy who’s a chip designer at HP and he wants to start a company to design a custom chip that will do something really cool. And I said, what’s he got in mind? He said, I don’t know, let’s go have dinner with him.
So we went and had dinner with this guy, and he described how he wanted to make a chip that could take in light and recognize letters and words. …Wow…that’s like my good idea from college, you could help blind people read with that. And so that became the start of a company that was originally called Palantir and then changed its name to Calera, and made essentially the first omni-font character recognition technology that worked without being trained. And as we started the company, we became more aware that Ray Kurzweil had invented an OCR system and a reading machine before we had, and we, you know, raised a bunch of money from Silicon Valley venture capitalists to compete with their character recognition product. Long story short, it was one of the early machine learning companies in Silicon Valley; our particular breakthrough was, we took millions of examples of characters and trained an algorithm in how to recognize those characters, and it worked really well. The company built up, sold a lot of products to insurance and law firms and the government and, you know, those were sort of our main commercial markets, but the dream of making a reading machine for the blind was still there, and I still didn’t know any blind people, but I just imagined they could use this. And so we built a secret prototype, based on our commercial character recognition product, that was connected over a serial cable to a PC that had a first-generation *tracks voice synthesizer in it. And we demonstrated this to our board of directors, and it worked, it scanned the page and it read it aloud, and my board was, you know, excited, the product demo, you know, there’s a new potential product, and they said, Jim, you’re the VP of marketing, how big is the market for reading machines for the blind? And I said, well, we think Kurzweil is selling about a million dollars a year, now that they’ve been acquired by Xerox….a relatively awkward pause occurred in the board meeting, and they said, well, but we’ve invested $25 million in this company, what’s the connection between a million dollar market and that? And I said, oh, it would be great PR, the employees are really excited about it, our customers will be proud of us…and they’re like, no, you know, you’re only $15 million a year and you’re supposed to be $30 million a year in revenue by now. You’re missing plan, you’re not making money, we’re not going to allow you to distract the company to launch a new product to help blind people, because it doesn’t make enough money. And they were, you know, right from a business standpoint, wrong from a social and moral standpoint—so that’s kind of what caused me to…to launch out of, sort of, the traditional Silicon Valley tech world and into the assistive technology and nonprofit world.

Chris: And that’s when you founded Arkenstone.

Jim: That’s right. So, so I went to, after the board vetoed it, I went to…the board vetoed the project because they didn’t want to distract the company. [*] said well, you could start your own nonprofit. And I said, what do you mean, nonprofit? He said, well, you don’t think there’s any money in this…I said, no…he said, I can give you pro bono help to start a charity, and you’d be essentially a tech nonprofit. I kind of giggled, ‘cause, as you know, I said well gee, you know, I’ve been associated with an accidentally non-profit tech company, you know…gee, maybe if you’re a non-profit tech company you’re like…successful by definition if you lose money! (laughs) And so, that was the start of Arkenstone, and the idea was, because the market was so small, if we could make a break-even, you know, half a million, million dollar a year venture, it would be a big success. And Arkenstone became the only high-tech company I’ve ever been associated with that actually beat its plan. I think within three years we were $5 million a year, making reading machines for the blind as an enterprise, and breaking even—and that’s how Arkenstone actually went into the reading machine for the blind business, and my old company was perfectly happy for me to do it, just as long as I did it outside of the company as a customer. They gave me a really big discount, they gave me extended credit, but—as long as I wasn’t distracting the team from making money, they didn’t have an objection to me doing it. And basically, I got a pretty sweet deal in exchange for a noncompete and no-hire agreement from my old company.

Chris: And that would go on to become what’s now Benetech, a nonprofit with a much broader set of goals and agenda. Why don’t you tell us about Benetech…

Jim: So Arkenstone got started in 1989, got to $5 million a year, and as time went on, you know, we would keep cutting prices, more people would be able to afford the product, our revenue stayed about the same. We were always break-even. We created a new product, a talking GPS for the blind called Strider, but it didn’t make enough money and we were short of money, and Mike May, who was then our VP of Sales and kind of our core user of that product, ended up spinning out of Benetech and starting Sendero Group to make talking GPS. But I was basically struggling with the fact that running a break-even social venture meant that I had no extra money, and the fact that we kind of had to shut down Strider or spin it off was basically an indicator that break-even was great as far as it went. So after about ten years, I got…I was kind of getting bored, I had all these ideas for other things that we could do to help blind people, to do stuff [for human rights], then the guy who started what became Freedom Scientific, Dick Chandler, came over and said, hey, I want to buy Arkenstone from you. And well, it doesn’t belong to me, it’s a charity, go away! So, he came back a couple of months later and he said, Jim, why don’t you tell me what your aspirations are? Hmm. This turns out to be a negotiating ploy, but as a nerd I didn’t really recognize it as such, and I said, well, I have all these dreams of doing, you know, other things for blind people, I want to do human rights…and he said, tell you what, I’ll give you $5 million to your nonprofit, buy the assets of Arkenstone and merge it into Henter-Joyce with its JAWS product and Blazie Engineering with their Braille ’n Speak product, and you know, we’ll create this new company, and you can stay in the nonprofit with your engineering team, rent the engineering team back to us for a year, and then go off and start new projects. So that’s…but they also bought the Arkenstone name, so we had to change the name of the nonprofit from Arkenstone to Benetech. And we did our year of work on the next version of all the products that we had sold to Freedom Scientific, and then we had the ability to go off and look at a whole bunch of new projects. And so what we did is, we had about $5 million from Freedom Scientific, which could not go into my pocket—that’s illegal, it’s a charity—we raised another $4 to $5 million from big Silicon Valley donors, especially Skoll and Omidyar, the two key people behind the creation of eBay. We had $9 million, we looked at a hundred ideas, we invested in 20 ideas, and four of them became nonprofit social enterprise products that went on to change their field. So the one that is really well known in the blindness field is Bookshare, but we also started the first big data group in the human rights movement, we started the first software for capturing human rights data so that the information wouldn’t be lost, we created an environmental project management package, and plus there are a lot of other projects that we tried that didn’t take off, which is pretty much the Silicon Valley way, it’s just that in every case we’re not looking to make money—‘cause we’re nonprofit—it’s how can we help the most people while breaking even.
And that formula turned out to, not only, you know, work after we sold to Freedom Scientific, but it’s continued to work to this day, and we’ll always have lots of cool tech-for-good projects in our hopper, and we’re busy trying to figure out which ones will take off, and then scale them up and make an impact.

Chris: And your business plan for Benetech received the Charles Schwab recognition as the best business plan for a nonprofit that year…

Jim: We got a lot of recognition. It was Klaus Schwab, who was the founder of the World Economic Forum, the Davos people, who gave us the Social Entrepreneur award and our Bookshare business plan won, I think, runner-up in the Yale business plan competition, which was the first social [*] business plan competition…and then we won the Skoll award. Even though we were pretty early on, once we made that transition to Benetech, we started to get a lot of attention, because the social entrepreneurship field—this idea of using innovation and entrepreneurship to help solve social problems—really started taking off in the early 2000s. Because we’d been doing it for a dozen years, we were seen as one of the founders of that movement.

Francis: Why is it that you won these awards, what specifically did you do best?

Jim: I think the unusual thing about us was, we bucked the Silicon Valley greed, profit-seeking motivation that often leads Silicon Valley to do some kind of nasty things. We said, look, no, we’re setting up to be in the public good. So I think the reason that people got excited and recognized us was that the idea of an exciting Silicon Valley startup company that had chosen to be a charity, to be a nonprofit, and to focus on doing social good kind of flew in the face of how people regarded Silicon Valley, which was make money at all costs, and kill yourself along the way. I think that was one reason; I think the other reason was that the things we were doing were really understandable. Many tech companies have great products, but the average human being cannot understand why this middleware company exists. We were helping blind people read, you know, we were the Napster of books, when it came to Bookshare. We were helping document human rights and helping convict genocidal generals of genocide. I mean, this I think captured more people’s imagination: that technology could be used deliberately for good rather than occasionally evil by accident, which is certainly the story of big parts of the tech industry today.

Chris: I like the phrase, the “G-mafia,” it stands for Google, Microsoft, Apple, Facebook, IBM and Amazon as the evils of, ah…artificial intelligence these days.

Jim: Basically, we were using what was considered at the time “artificial intelligence” to actually help blind people read. And AI has that potential to do good, it’s just right now I think we’re at a point where a lot of people are applying machine learning/AI in very sloppy ways…and hurting a lot of people, because they’re just ignoring many of the things we know about things like statistics. Anyway, we can come back to that, but I’m spending a lot more time kind of helping, not only the disability community, but other minority communities understand some of the threat that AI, badly applied, actually poses to their interests.

Francis: One of the things that I found really troubling during, like, the 80s and 90s even, was the religion of the free market, where like the free market could solve every problem. One of the things that I’ve seen, especially in research, is that when there isn’t a lot of money to be gained, it’s hard to get funding and it’s hard to get things up and running a lot of the time. Say, for example, with rare diseases, that kind of thing, you know we have this situation now where nonprofits kind of pick up the slack, but it seems to me that there’s like an inefficiency to it all because, for example, my girlfriend has a nonprofit, and she spends half her time or more raising funds. Is this model, in your view, working? Is there…other ways to approach it on a larger scale? Is government maybe supposed to play more of a role?

Jim: Well, the short answer is yes. People often try to say, Jim, why aren’t you a for-profit? Couldn’t you be making a lot more money as a for-profit? And the answer is…yeah, but we want to work on social problems. And many social problems are directly connected to a market failure. The reason that Arkenstone, you know, the original name of Benetech, got started was because our investors said, “no, that doesn’t make enough money, don’t do it.” And so, now, the free market religion usually goes the extra step of saying, if it doesn’t make a lot of money, it’s a bad idea. And that’s the idea that I reject and a lot of other people reject, which is, wait a minute, if you think that, then you’re going to consign 95% [of the people who need it] to never getting the benefits of most of this cool technology we’ve created. And the great thing about technology is, the marginal cost of a new piece of software, a new chunk of content, is next to nothing. So as long as you can see your way clear to actually working on this thing that’s not an exciting market, you can do an amazing amount of good for almost no money. Bookshare is an example of, we promise any student in the US that needs a book, if we don’t already have it, we’ll go get it and add it to our library. And now the library has 700,000 books and about 700,000 users—that runs for $10 million a year. On less than another $10 million a year, we can solve the problem for the whole darn planet, which by the way is a fraction of what the planet spends on library services for people with disabilities. Its leverage is terrific. The other part of your question is about what model is there. So, there’s basically two models; one model is, you encourage the creation of nonprofit social enterprises, like Arkenstone or Benetech, and now there are several hundred of them. So, we were alone back in those days, and the other people who were starting similar things at the same time, we didn’t know about each other, I didn’t know that this thing existed for the first ten years. And so now we know that we’re a field, and the people who create the technology—the companies, the academics, the authors, the publishers, whatever it might be—they’re often quite generous with access to their intellectual property to help more of humanity. So we’re actually able to get our hands on this stuff. So I think that that is a good model. The nonprofit sector is never going to be as scalable, as efficient, as something that actually makes money. So if you can do social good and make money, I encourage people to do that. But I think your last point is, what about regulation—I think it is possible to take some of the worst things about industry and change some of those things by legislation. And the one that a lot of people with disabilities are familiar with is that it’s actually against the law to discriminate against people with disabilities. Now, we all know that doesn’t stop it from happening, but on the average, it makes it harder, and as people lose more and more lawsuits, they do more and more to avoid getting, you know, caught in a lawsuit, and we actually move the ball forward.
So I think that both of those models are important, and I think that it’s clear that we have a need for more nonprofit social enterprises, and we also have a need for more government regulation to remedy some of the excesses of the market, some of the negative social consequences that come from—you know, whether it’s over pollution and climate change, or it’s discrimination through AI—these are all things that need some attention, in my opinion.

Chris: Can you speak more to some of the human rights and the non-disability related things that you work on at Benetech?

Jim: Oh sure. I think the great thing about the sort of transition at Benetech was, it gave us some money to respond to some of these needs. And so the story of Benetech has been, starting from a base in technology serving people with disabilities (which by the way is still more than 2/3rds of what Benetech does), but we’ve been able to do quite a number of projects for other parts, other social issues. So one of the big questions we had is, how can we help prevent atrocities and human rights violations in the developing world? And you know, we thought about it a lot and the best thing we could come up with was, what if you could capture and not lose the testimony of, you know, people who survived human rights abuses, or witnessed them, and so that started a very long sequence of work at Benetech—it’s been going on now for over 15 years—of supporting the human rights movement. Because frankly, if you think about the human rights movement, the only thing it has is information. I mean, activists and information are their only assets. And so, we found out that the majority of these stories, these truths, were getting lost—groups were going out of business, they were getting…their offices were burned, their computers were stolen—so the idea is, hey, let’s capture those stories, let’s back them up into the cloud, so that they’re not lost, and then if we get a big pile of data, then we essentially had a big data group that would actually analyze these patterns and so testify in genocide trials, identify patterns of basically who did what to whom, and be part of the support for the outnumbered human rights activists. And so we got involved in a lot of the large-scale human rights violations, you know, civil wars and conflicts, we helped really understand what the numbers were, and in political science that was kind of unusual, to actually be asserting things based on data rather than opinion. And we had lots and lots of data in a lot of these civil wars and conflicts, from many, many different groups. And so we were able to do this, we moved into the LGBT community in Africa, after they were under a lot of threat (capital punishment for being gay was being floated as a law in certain African countries), helped groups write the first reports on police violence against gay people in their country, leading to change, and often we’re helping the UN, so…so for example right now, our biggest project in this area is that there are between five and ten million videos that possibly include information about atrocities in the Syrian civil war, and we’re writing machine learning AI algorithms to help basically go from five or ten million videos that you might want to look at, which no person can actually do, down to maybe the 500 or 1,000 videos that might be relevant to preparing a case against people who launch chemical munitions. We’re not a human rights group, we’re the nerds and the scientists who help make the human rights movement more powerful. That’s one example. Another example is the environmental field came to us, about a dozen years ago, and said to us the state of the art for project management in the environmental field at the time was an Excel spreadsheet, and that—you know, it’s another case of the market failure that we talked about: construction had fifty different project management packages, depending on what you were constructing.
But people who were running, you know, wetlands restorations, or campaigns against environmentally bad practices, were stuck with an Excel spreadsheet. And a general tool like Microsoft Project was way too complex for your average biologist or activist, so we wrote something called Miradi. We jokingly called it TurboTax for the environmental activist/professional—the idea is that it would ask you a set of questions, kind of a “wizard,” and come up with an explanation of how a dollar in turns into, like, more salmon or cleaner air or whatever it might be—and so that project has gone on to be, you know, the leading project management package in the environmental movement. The list kind of goes on, I mean, we’re doing a ton of stuff in assistive technology, we started the Diagram Center, which is all about how to make STEM and STEAM content accessible; and the whole idea of Diagram was, why doesn’t everyone in the field get together, build shared technology and shared standards as a common effort, given that we all care about making science and technology, engineering and math, and arts content more accessible. And so that’s a great example, and then of course the one that I think is very exciting is the woman who took over as CEO of Benetech from me late last year, Betsy Beaumon, who’s been at Benetech for 10 years, she helped get the “Born Accessible” campaign launched, and the goal of “Born Accessible” is to eventually put Bookshare out of business. The idea is that if we can convince the publishers to create their mainstream e-books completely accessibly, then blind and other people with disabilities related to print can just get the standard e-book and it should work great. And that’s increasingly the case, and we’re hoping to do that for more and more complicated works so that, eventually, the need for something like Bookshare will peak and people will start relying on mainstream e-books to be able to read what they need to read.

Chris: And what do you see for the future, what ideas are out there that you haven’t started on that you would really be enthusiastic about doing next?

Jim: So I’m actually having a blast with not being the CEO of Benetech, and the great thing is, I think Benetech is going to continue to expand and have ever-greater impact under the new leadership, and frankly, after about 30 years, it was probably time for someone else to take over Benetech. And the roadmap that Benetech has is pretty clear—you know, we worked closely with the World Blind Union, and other blindness organizations, especially the NFB here in the United States, to get the Marrakesh Treaty passed globally, to get the US to ratify it, Europe has ratified it…I think the goal is that even as we work on the Born Accessible movement here in the US to reduce the need for Bookshare, I think that there are millions of people around the world for whom access to books is, you know, ten or twenty years behind where we are in the US. And so I think the Marrakesh Treaty is going to let us, over time, become the national library in those countries, to be where Bookshare was 15 years ago in the US. I think right now Bookshare is the national library, the free national library, in easily a dozen countries already. So I think that roadmap is kind of set, and obviously Benetech is going to go off and do more and more stuff in human rights. Benetech’s also started doing stuff in health and human services, they have a project called Servicenet to make information about health and human services a lot more available to people who need them, because that field is stuck kind of in the “yellow pages” era of information management. So Benetech is off and running with those things. I’ve launched a new social enterprise called Tech Matters, as of January. It’s actually fiscally sponsored by Benetech, so we still have a connection, but the difference is that if I don’t raise money for Tech Matters, then I don’t get paid, so Benetech’s not on the hook for paying my salary. And now I’m working on a whole fresh set of social problems that Benetech hasn’t had the bandwidth to work on. So I’m working with the global movement of child helplines, so these are the people who, you know, in many countries take the phone call from a kid in crisis or someone who sees a kid being abused, and helping them update their technology platform to do a better job of helping potentially a hundred million kids around the world in the next few years. I’m working on fighting slavery in the supply chain, basically unethical labor practices, and I have a next-generation environmental project that’s going to help essentially regions figure out what to do about climate change and the environment and matching conservation up with livelihoods and agriculture and all that sort of stuff. So the common thread to everything I get to do today is, someone has a social problem that they want to solve, they’ve got a group of nonprofits or government agencies, or for-profits, that want to work together on solving that, and I get to be their nerd. I get to help imagine what technology products or standards or glue might help unlock the potential of these people to help solve this big social problem. And that’s, frankly, a blast. There are, you know, fresh challenges, lots of people who are very dedicated to the community that they serve, and I get to make their tools or help see that they have the best possible tools for the job.

Francis: I think what you’re doing speaks to an enormous need in society today, where there are all these technological solutions, and maybe potential for creativity with what we have, and it’s not really being discussed even as a choice for society. I think that the role that you’re playing is one that we need on such a larger scale.

Jim: I’m glad to say that I’m part of a growing movement, because a lot of people see the same problem that you see with technology. And I see it most obviously in the universities, both among faculty and students. I think that computer science faculty—to pick a group—are very concerned about what they’ve helped create, which are basically, in many cases, technology companies that are, if not immoral, certainly amoral, and often kind of clueless about the negative social impacts that they have. Quite a number of universities have started programs that go by a bunch of different labels; one label is “public interest technology,” the idea that people might want to work on essentially using technology to help solve social problems and serving in the public interest rather than in the private interest. There are people who work on “computer science plus” problems; so, how could computer science help ag, how could computer science help human rights, how can computer science help education. So there is a movement here, and I know that, for example, Stanford just announced a major university-wide program to try and engage their faculty, who are pretty high-powered, and their students, into actually working on social problems rather than just relentlessly spinning off, you know, the next Silicon Valley unicorn company. So, I like where I think this is going, and of course there is, you know, very exciting stuff going on—it started especially in the last administration and has continued under the Trump administration—which is, you know, reforming how the federal government uses technology to better serve people. Government agencies realize that they’ve done a really bad job of serving society, I think the Healthcare.gov fiasco of a few years ago was kind of, you know, one of the low points, and I know that a lot of people are trying to make sure that technology actually works better for, let’s say, veterans or people who are on Social Security, and hopefully we will see more progress in those areas, which touch an awful lot of Americans.

Francis: Well if you want to talk about waste, what are we going to do about this defense budget, and the amount of resources…I mean, all that power that could be used for good. I know this is sort of one of those out of left field questions, but you can’t, you know, get away with saying, hey, I’m not a rocket scientist because actually at one point you were.

Jim: (chuckles) Obviously I got started by hearing about a military application of technology and thinking about a social application of that same technology. And the good news is, you know, Silicon Valley got its start almost exclusively in defense industry applications, and the story of Silicon Valley over the nearly 40 years that I’ve been here, has been a steady move away from being focused as much on military applications of technology to applications that help society, and we see this in the giant protests at companies like Google about their technology being used, say, to target people for assassination with drones. There’s an awful lot of people in the tech field who did not go into the tech field to build technology that did that sort of thing, and talent ultimately is one of the biggest factors in what goes on in tech companies. And so many tech companies are going to have to pay attention to how their technology is being used, and I think that we’re going through a period right now, you know, we’re highlighting not only how technology is being applied to military things, but also how the technology is being applied to, whether it’s enabled bullying, or thrown elections, or whatever it might be, I think people are beginning to grapple with some of these social impacts that got ignored during the go-go phase of the last, especially 20 years of the growth of the internet.

Francis: I love the idea of being a nerd. I mean, I consider that like a pretty high compliment in my world…you know, you think that maybe this country would want to try a nerd for President, I think what we have now is like as opposite a nerd as you could possibly be.

Chris: I think Michael Dukakis was that candidate, and we…he didn’t do very well.

Jim: Yeah, I know, my sister was saying “will you please run for president” and I’m like—naah.

Chris: C’mon Jim, everyone else is…why not?

Jim: I think that one of the biggest concerns expressed by nerds, especially nerd philosophers and nerd thinkers, is that the technology that we’ve created has undercut, kind of, respect for technology, for science, for fact. And they lay that at the door of essentially what our social media world has created, which is that the way a Facebook or YouTube makes money is for people to stay on their site longer, and they’ve learned that the way to get people to stay on their site longer is to feed them more and more outrageous things to cause them to get angry, or get sad, or—to basically appeal to their lowest emotions. The problem with things that are false is that they are more engaging. Essentially, Silicon Valley has created this giant engine to sort of stupidify the average person who uses their products, because it’s in their economic interest for you to get more and more false information because it’s more engaging. ‘Cause, you know, whether it’s clickbait or a false claim, those things get a lot more attention—i.e., more people spending more time on the site—than things that happen to be true. I think there are people who are very, very worried about this, and this might, you know, come back to some of this regulation that so many people in Silicon Valley object to, but they’re systematically destroying respect for science, respect for truth, respect for institutions…by creating a tool that relentlessly destroys those things in their economic interest.

Chris: There was a study published recently, I think it was out of a university in the UK, that showed that if you start YouTube with a brand-new account, completely fresh, you know, Google doesn’t know anything about you so it doesn’t know what to recommend, and you do your first search on the US House of Representatives, and then just watch each video that it recommends to follow next, and within eight to ten videos you’re going to be on something promoting the flat earth theory.

Jim: Yeah. Or Alex Jones and Infowars, or something else like that—yes. I was actually reading a book entitled “Zucked,” which is by an early advisor to Facebook’s Mark Zuckerberg, who actually says it’s like three or four things, but we tend to end up there because the algorithms encourage it. And AI is really good at doing whatever task you set it to, and if the task is “have people spend more time on the website and click more and look at more ads,” that’s why you end up with stuff that’s false. But I’m on the techno-optimist side. I think that most tech people want to be creating things of value, and do not want to be associated with things that are evil by accident, or now, one might argue after it’s been pointed out enough, evil on purpose—and so my goal is to keep putting forward the idea that it is possible to make a living doing technology for social good. It may not be the best path to becoming a billionaire; it’s a pretty bad path if becoming a billionaire is your objective. You know, there’s an awful lot of people who want to live a life that they can actually be proud of and work on things that they’re actually proud of, and as a demonstration of that, there are a lot of great teachers and a lot of great people in many professions who help people in spite of the fact that they don’t make as much money. I want to get more of the tech field to channel itself into this: how can we do good on purpose? How can we actually set out to maximize human utility—making people’s lives better? Because I think that is ultimately what drew a lot of us to being nerds.

Francis: We had a guest, Richard Stallman, on recently, who had this really great idea I thought, which is to have a progressive tax on corporations based on their size. Basically what that would do was, it would make it so, you know, at a certain point it just doesn’t make sense to get any bigger, and you know the idea that that would ultimately create a more diverse and robust economy.

Jim: And of course Richard, you know, founded the free software movement, which really influenced these more community values—and we are giant fans of free software, we’re also giant fans of open source software, which I know Richard’s not crazy about. But I’m not as much of an economist, so I’m less sure about how we actually choose to solve this problem. I do believe that whether it’s income inequality or abuses of tech platforms, we’re going to see more regulatory activity, we’re going to see more changes in tax, but ultimately it depends on the electorate deciding that they want those changes. And it’ll be interesting to see if we can get that consensus, because clearly we haven’t necessarily been moving in that direction lately.

Chris: As we just discussed, with AI driving people to increasingly faulty and useless information, people are more and more likely to be misinformed.

Jim: Yeah, and sometimes it’s much more subtle than that. I did a major study last year for a major disability donor, and they asked me to look at what technology might help if the goal was to greatly increase the number of people with disabilities who had employment. Many of us on this call know about assistive technology and other ways that you could make people in employment more effective and more likely to be able to get a job or keep a job, because they have tools at hand to do it. The thing that blew my mind—and maybe it shouldn’t have—was essentially technologies taking over the recruiting and the hiring process in almost all large corporations and many small and medium corporations. And Artificial Intelligence, machine learning, has been applied to every single step in that process, and in many cases, the way machine learning has been applied is egregiously discriminatory against people with disabilities. Which, one would think, is against the law in this country, but that doesn’t stop people from buying this technology or applying it, because the people who sell the technology say, “our machine learning, it doesn’t see gender, it doesn’t see race, it doesn’t see disability” and yet the way they’ve implemented these things can’t help but discriminate. And I think that we’ve, you know, we’re part again of a movement of calling out these technologies and saying “how is it possible that that technology doesn’t illegally discriminate against people”…and I expect, actually, that we’re going to have to have disability rights attorneys suing companies over buying machine learning tools that discriminate against people with disabilities, and eventually people will have to actually correct this. But some of these companies are going to become, you know, very rich before anyone actually calls them to account for the fact that they built something that extensively discriminates against people with disabilities.

Chris: About a year ago, I wrote a blog article called “Can an AI be Racist?” and I based it entirely on what Apple suggests in my Favorites playlist that comes out every Tuesday, if you use Apple Music…and every week they recommend 25 songs that you’d like to listen to, you know, from your own library and put it together as a Favorites mix. And literally for weeks and weeks on end, the top dozen were all white artists, and the bottom thirteen were all black artists. And Jimi Hendrix was always the borderline. So for some reason, Apple Music prioritizes white artists over black artists, and my record collection’s probably 75% minority artists.

Jim: Yeah, it’s really fascinating how that works out, isn’t it?

Chris: Yeah, but it surprised me after a few weeks in a row, when I started following it.

Jim: This is an issue that’s getting a lot of attention, and one of the things that people are trying to do is make the people who work on machine learning a much more diverse crowd, so that the kind of oversight that might lead to the kind of outcome you describe is less likely to happen if you have a more diverse group of people working on it, going “gee, this result seems very odd.” But in many cases we don’t have people who work on these tools that see these obvious problems. And of course, I think the gender discrimination problem is the one that’s the most obvious, and people have the most awareness that it’s a problem. There’s a famous story about Amazon killing an automated resume screening tool because they could not keep it from discriminating against women. And if it’s that hard to stop something from discriminating against women, imagine how hard it is to stop it from discriminating against minorities or people with disabilities.

Francis: I think that’s really a fascinating line of thought, because it circles back a little bit to what we were talking about earlier, where in, you know, like a capitalist free market system, you know there’s going to be like certain things that just don’t get attention that really need attention. I wonder if there’s some kind of connection there.

Jim: The example that I use in this report is a company called HireVue (and they spell it, you know, like hiring people, and Vue like v-u-e, I think). What they do is create a screening tool where they show you a video and you record yourself answering that video, and then a machine learning algorithm analyzes your facial movements and your voice tone and your word choice, and decides whether you are one of the one in five people who do this who get an interview with a human being. So they screen out 80% of all people. And I think we can all imagine many, many different kinds of disabilities that might get in the way of using this, from accessibility problems with the app itself, to actually pointing the camera at the right spot…and then the question is, well, how many people with disabilities were in the training set that they used to create this “scientifically validated” thing? And I’m guessing not a lot of blind people were in their training set. So, you know, the algorithm itself, what they’re collecting, has tremendous discriminatory capabilities. What if someone can’t speak, what if someone has a stroke and half their face doesn’t move, what if they’re from a culture that discourages obvious shows of emotion…I mean, all these things are discriminatory, and then you have the training set, and how it was trained, and I’m guessing that it did not reflect a diverse population that included lots of minorities and people with disabilities. And yet, this tool is out there and being used all over the place, and one of my favorite geeks, the guy who was the head of our human rights program for almost ten years, said “something that you all should watch out for is, when the customer for a machine learning tool and the people who build the machine learning tool…if neither of them suffer any consequences when the machine learning tool gets something wrong, you’ve got a case of moral hazard.” And this is a classic example: the company that bought the tool is saving money, the company that sold the tool is making money, and the fact that they might egregiously discriminate against people with disabilities…who’s suffering? People with disabilities, not the people who are in the middle of this transaction. This is the core of the problem that we’re having, essentially, with the new generation of technology: the people who are engaging in the financial transactions are actually not the people who suffer the consequences of the decision, right? The users of Facebook, people posting on Facebook, they’re being commoditized and product-ized to get a free tool, but, you know, it’s Facebook and their advertisers that are making all the money. This problem just keeps resurfacing, that we’ve moved away from the traditional dynamic of “I am the seller, and you’re the purchaser, and we’re the only dynamic”…Silicon Valley, in many cases, has moved to this dynamic where the person who actually uses the product is the product, and I think we’ve all heard that kind of claim, but it shows up in this sort of thing where the people who suffer the consequences of it going wrong aren’t actually making the decisions about how to build the product or actually how to pay for it.

Chris: And how do you see a path to disrupting that?

Jim: I mean, we’ve talked about the two paths that are there, which is, you know, starting nonprofit social enterprises that actually focus on doing good with the technology, and regulation to curb the most excessive abuses by the for-profit world. I’m not naive, it’s…it’s not going to be possible for nonprofit social enterprise to displace Facebook. I don’t take that as a very serious option. But I find that many of the technologies, or the things that we’ve come to understand from these technologies and these successful companies can be applied to deliberately doing social good. Obviously doing pattern recognition to help blind people read books was the one that started my career. You know, right now we’re trying to figure out how could you use machine learning to better prosecute war criminals, in the Syrian context. The project we’re working on around, sort of, large scale environmental stuff, how could we be using machine learning to better model erosion and water retention in regions that are going through land degradation and desertification. The same tools can be applied to these things, and I think that a lot of this is intent. We need to get more people intent on doing social good with technology, to create value that is not purely privatized, that actually keeps in mind the impact this has on society, and then we need regulation that actually makes it difficult for people to go out and just ruthlessly exploit people, which they actually have a great habit of doing.

Francis: Another thing that I think is a big flaw in our society today, in regards to its relationship to new technology, is basically how the workday has gotten more and more intense even as the actual amount of labor it takes to sustain a quality of life for the world has gone down. I was wondering if you could speak to that at all?

Jim: There are people that are working on this, and different groups have tackled different parts of it. So, there’s a guy who came out of the tech industry who started a movement called “Time Well Spent.” And the idea is that these tech tools have stolen a lot of our attention, and that has caused our human relationships to actually suffer, because these things are designed to be addictive, more and more addictive, overcoming many of our self-governing mechanisms, you know, like why we should spend more time with our family, for example. And so they’ve influenced new features on the latest generation of iPhones and Android phones: tools that help you keep from looking at email right up until you go to bed, creating more awareness of, gee, I spent 40 hours last week playing this online game, maybe that’s actually not what I want to do. We also have some things going on in terms of social norming and regulation, and the Europeans are further along on this, whether it’s with much stricter privacy requirements and antitrust requirements, and actually fining companies like Google billions of dollars for violating those things. I also know there are European companies that turn off email in the evenings so that their employees can’t work on company email after a certain point, after six o’clock at night, or before eight o’clock in the morning, whatever it might be. One of my favorite books on this subject is from Tim O’Reilly, the guy who coined the term “open source” and has been a big leader in the tech field for a long time; he wrote a book called WTF, which when he gave a talk on it in the Obama White House got sanitized to “What the Future.”

Chris: That’s funny, ‘cause President Obama was on Marc Maron’s podcast called WTF, and he expands it to be the largest philosophical question of our time, “What the Fuck?”

Jim: Yeah, exactly. Well anyway, it’s good to know that President Obama’s up for this in multiple dimensions, but Tim’s book—I mean, there’s a lot of exciting stuff there about where he sees the future going. But one of his biggest points is, we get to choose—you know, people in the tech industry often present this as inevitable: it must be this way. Data wants this. Business just works this way. And that’s not actually true. As a society, we can choose to prioritize privacy more, or prohibit some of the worst abuses of our data, or whatever it might be. And so I think the question is putting, sort of, society back in charge of making some of these choices, either informally, by what people choose to do and not do, but also by what their legislators do in terms of regulating industry.

Chris: Changing gears then, what is it about technology that you are most optimistic about, looking to the long-term future? Like, you know, where do you see us in 20, 30, 40, 50 years if your optimistic vision of technology happens?

Jim: I think that there are some things that technology can do for us that will make lives better. So let’s say that we have some agreement on what a better life is. Or maybe that’s just people having more autonomy to make choices about their lives. So we could imagine technology helping solve the climate change issue, or taking some of the extreme impacts of climate change off the table. We can imagine technology and access to information being such that education becomes more effective, that the rights of women and minorities and people with disabilities get a greater level of respect. Obviously there’s tremendous stuff going on in the medical area. So if our goal is to reduce human suffering, to improve the quality of life for people, to give people more autonomy and more choices in how their lives unfold, so that communities can make choices about how they want development or industrialization or conservation to be pursued in their communities—for every single social issue that we face, there’s a lot of people working on that issue that want to make a dent in it, and I see technology as an indispensable tool in helping realize those visions of a better, more just, healthier, greener planet, whatever it might be. And so, you know, that’s what makes my job really so much fun. If someone sits down with me and says here’s a social problem, and here’s the better world that we can imagine, it’s not hard for me to come up with five exciting technology ideas that might help contribute to that, a couple of which are bad ideas, a couple of which are probably great ideas—I don’t know which are which right now, but it’s not hard to figure that out, and that’s what I get to spend my time on. And I know that there’s an awful lot of people coming out of the tech industry who would like to be doing that kind of work, and I want to make that just more normal, more sustainable, more of a career choice that more people with tech skills can actually pursue.

Francis: What would you recommend to someone who is hearing this and wants to change careers now? What would be the first steps for someone like that?

Jim: You know, you’re most powerful when you bring skills to bear, and so if you are mid-career, there’s a lot of things that you have learned in your career that might apply to doing social good, right? I mean, I think my background as a tech entrepreneur and a machine learning guy actually turned out to be pretty darned handy for a lot of the things that I ended up doing, leading Benetech, and now Tech Matters. And if you are early in your career, I mean, I often advise people, saying, what are you really good at? Get better at it, get some experience…some people come out of school and go straight into the nonprofit sector. And I think that that is increasingly a career option, but I’m also aware that the way our economic system works, often people come out of school with so much debt that they have to go to a job that makes more money. But I see people coming to the kind of work that Benetech does, whether it’s fresh out of school, early career, mid-career, late career, or the final sort of phase in your career—people at every step along that way are saying, I want to move from money to meaning, which is sort of one of my catchphrases. And if you have a skill that’s actually applicable to these kinds of social good applications, there’s a lot of meat out there. It just doesn’t happen to pay, you know, $500,000 a year.

Chris: But you can make a reasonable salary in the non-profit sector. Because of some research I’ve done recently, I know the salaries of an awful lot of people working in the non-profit sector in the blindness space, and they’re making a living wage.

Jim: Yeah. And yes, I mean, the CEO’s generally not making, you know, billions or hundreds of millions or tens of millions, or generally not millions, but we can get people who are working for big tech companies to come join us. They often take a significant pay cut, but you know we can still pay more than $100,000 a year to a software developer with a lot of experience, even if they can make more than $200,000 or $300,000 at some of these tech companies. I mean the average salary at Facebook is like $250,000 a year.

Chris: But that doesn’t include contractors, and they have a ton of contractors there.

Jim: No, they do have a pretty deft way of pushing those people off payroll. But you come fresh out of school from an elite school, and you get paid an awful lot of money to go to these companies. But again, if you’re not about profit-maximization but about working on something that you really care about, yes, you can make a decent living and…we have lots of people who work for Benetech who work around the country and take advantage of the flexibility of working from home. Many of our employees with disabilities are actually working from home rather than living in a very high-cost area like Silicon Valley that doesn’t have great transit. They can be in a different place and do a great job, because frankly, given the kind of technology we have today, your online presence doesn’t look much different whether you’re in Chicago or New York or down the hall here in Palo Alto.

Francis: One of the things that I try to do in this show is create almost like a brainstorming kind of moment, at times where you’re maybe even like theorizing about how the future could be and how…either what technology you would predict, or importantly, what technology that isn’t being used right now that if implemented, could make huge changes for the better.

Jim: I tend to be more practical…even…

Chris: This gives you the chance to step into speculative fiction.

Jim: (laugh) Alright, this is my Atwood moment.

Francis: This is the show where we’re up for that.

Jim: The thing that really excites me about technology and the directions that we’re going with sensors, with better medical technology, with better data collection technology, with better machine learning technology, is the idea that we would be able to understand social problems at a far more detailed level, which is very exciting and a bit terrifying, right? So the challenge to us going ahead is, how could we use knowing everything we possibly might want to know about a social challenge, and then use the technology to do social good while still respecting the privacy and human rights of the people who are involved? And so, I think that that is the essence of what I want to see going forward. I just love sitting down and saying, imagine we know everything, now what will we do? Because the way that we’re going, that’s actually a realistic assumption for tackling some of these problems, thanks to the incredible sensing infrastructure, data infrastructure, that we already have, that could be applied not so much to making money, but instead to making life better on the planet.

Chris: Is there anything you’d like to promote or plug, whether it’s something you’re working on or something that other people are working on?

Jim: I think the thing that I want to “plug” is that people should get more involved in using technology explicitly to do social good. I think that it’s something that’s in many people’s hearts, and they feel like it’s like almost they don’t have permission to do that…I want to give people permission to go out and find a way to make a living while doing really great things through technology.

Chris: Excellent. And with that, thank you so much for coming on Making Better, Jim.

Jim: I’m glad to be part of it. Thanks.

—END


Making Better Episode 5: M. E. Thomas

Episode 5: M.E. Thomas Transcript

(music)
Welcome to the Making Better podcast, interviewing some of the world’s finest thinkers about a more optimistic future. Now, here are your hosts, Chris Hofstader and Dr. Francis DiDonato.

Francis: Hey Chris.

Chris: Hey, Francis.

Francis: So today I’m very excited, we have author M.E. Thomas, who wrote Confessions of a Sociopath: A Life Spent Hiding in Plain Sight. When you hear the term “sociopath,” what comes to mind?

Chris: I would think it’s somebody who’s either a criminal, highly manipulative, or…generally evil. But that was before we had our conversation with M.E., and a lot changed in my mind in that hour.

Francis: So with that in mind, why don’t we just jump straight into the interview today.


Francis: Hello! I’m really delighted to have M.E. on today. So welcome to the show…

M.E. Thank you.

Francis: Just to start, for people who aren’t…who haven’t read your book, or…would you like to introduce yourself, your background?

M.E. Yeah, sure. So I, I guess starting at the very beginning, I’m a westerner, grew up in California. I’m Mormon; the church actually is trying to get us to say the full name of the church, so I’m a member of the Church of Jesus Christ of Latter-day Saints. I was born into it, raised in it. I have pioneer ancestors. I’m a musician, I majored in music and then I went to law school, and I happen to be a diagnosed sociopath, or, diagnosed with antisocial personality disorder.

Francis: I guess what I’d want to start with is just maybe your thoughts about neuro-diversity…and just like the spectrum, even among sociopathy and how it’s generalized.

M.E. OK. It’s an interesting question for me, because I didn’t even realize that I was different, or how I was different, until kind of a later age. So I’ll just quickly walk you through that. Like, when I was a child I always thought that I was different, but I mostly thought that maybe I was just smart. I am pretty smart, like in the way that I perform well on standardized tests, or whatever. There were just kind of things, you know, that other children didn’t see that I saw, and vice versa, honestly. I just had different blind spots than other people. So I mostly just grew up with a different perspective. It wasn’t until I was in law school, and I was sharing an office space with this other law student—we were both law students doing a clerkship at the time—and we were really bored, because they honestly only had like five to ten hours of work for us to do a week, but we still had to be there. And during that time, we just talked about everything. She was really interesting, you know, she had a degree in theology, she was gay, though…and had grown up kind of during this era in which it wasn’t OK to be gay, so she was a little bit older. And after, you know, weeks of talking about basically everything, she said you might want to consider the possibility that you’re a sociopath. So I looked it up and I thought, wow, this really fits.

But I didn’t think of it as a disability at the time, I didn’t really think much of it at the time, which is probably kind of typical sociopathic reaction, to not really care about personal details in my life. And it wasn’t until maybe, it was about five years later that I was having some difficulties in my life. I had work problems, getting fired or kind of let go from a job, or relationship problems, long term friendship problems, romantic relationship problems…and all these things falling apart at the same time. And I started to notice a pattern, I started to notice, hey, this is not the first time this had happened. And in fact it happens about every three years. So to that extent I started thinking maybe there’s something to this sociopathy thing, maybe I should look into it, and maybe this is..this is somehow contributing to the fact that I, every three years, kind of have this cyclical self-destructive way of living. And I thought, too, you know, I’m getting older. When I was younger it was fine to self-destruct like that, because it’s more expected, people…other people were doing it, you know, it’s fine to total your car, you know, everybody does it kind of once in their teenage years. But you can’t kind of keep doing that as you become an adult. It was starting to become apparent to me, you know, it was starting to have more lasting consequences for me. And so I looked into it, I started a blog, which was sort of a journal I guess of researching different aspects about sociopathy. And the other reason I started the blog was I thought, you know, the stuff that I read, kind of online at the time—this was like 2008—so this is…there isn’t a lot written online, but the stuff that you did read was pretty negative, almost uniformly negative. And it said it wasn’t treatable, and it said, you know, there’s nothing that we can do for these people, and they’re just, you know, essentially a plague on society. And the best that society can hope for is being able to identify sociopaths and essentially set them up on an island together, you know, to live out their lives. Or people talk about, you know, just round ‘em up and kill ‘em, that sort of stuff. And I thought, that’s not been my experience. You know, I found out a lot of the research on sociopathy has been done on male prisoners. I don’t think, to this day, there’s research been done that either looks directly at the female population, or includes a significant female population. 

So then, you know there’s just, like, a lot that we don’t know about sociopathy. I think there’s a lot that we don’t know about disabilities in general, and the role that they kind of play in society, and the different things that people can do with a brain that’s wired differently. And I think it’s kind of…myopic to think, you know, there’s an optimal human. It’s like, what is optimal today is not necessarily going to be optimal tomorrow, etc. In fact I work for a nonprofit sometimes, do some work for them, legal work, and they quote this, I forget what the exact statistic is…something like 60% of the jobs that kids today will be doing when they’re adults don’t even exist yet. So they’re kind of suggesting, you know, like, we can’t just train people, we can’t take ‘em at eight years old and say, OK, this is the job that you’re going to have when you’re an adult, and we’re going to give you the education and training necessary to, you know, optimize your performance of that particular job, because we don’t know that that job exists. So I think that sort of suggests that there’s a little bit of hubris in thinking that there’s going to be an optimal human, and thinking that the circumstances that we currently live in are going to be the circumstances of tomorrow, essentially. So I guess I think, in terms of neuro-diversity, you’re basically hedging your bets as a society. I think there’s the moral, kind of, reason for why we are interested in neuro-diversity, which is that all human life has value, you know. If you think that, then naturally you’re going to be on the side of neuro-diversity, but I think there’s also a good strategic way of thinking about neuro-diversity, which is that there are sometimes situations in which you want a certain type of personality or a skillset or experience; the more diverse it is, the greater chance society has of surviving. Just a quick funny kind of example: so one of my friends recently had a birthday party for her stepchildren. And she does not get along with the ex-wife, right? And the ex-wife was going to be coming to her house, and the ex-wife has come to her house before and given her the silent treatment, and like treated her like trash. So she’s like, you know, “I really want you to come, M.E., to this birthday party”…essentially—maybe you guys can kind of guess—to talk to the ex-wife, to put her down in subtle ways, you know, to basically check any attempts that she made at, you know, these little micro-aggressions, and to, like, you know, instill my own—her own sense of fear about, you know, messing with this situation. Unfortunately I wasn’t able to come…

Chris: I’ve been invited to parties just to be the asshole..

M.E. Right. But being an asshole sometimes is terrible, you know, in social situations, and sometimes it’s really, really useful. I think most people can kind of understand, you know, you’re in a terrible work situation, or you’re, you know, you’re in a terrible like public situation, like one time—same friend—we were dining, we were eating Thai food, really one of these kind of hole-in-the-wall places with eight tables. We happened to be seated right next to these other two women. And then halfway through their meal, they turned over and said to my friend—and I think they had, like they had every moral justification in the world to say this, but they said—“you know what? you have a really annoying voice, and your voice has been annoying me this entire time that you’ve been talking during dinner. You have ruined this meal for me.” And …

Francis: Oh my god.

M.E. I know! (laughing) I went off on those people! I’m just, you know, not afraid of confrontation, you know, I just—I don’t even think I was rude. I didn’t yell, I didn’t get emotional, I just, you know, cut them to the quick: “that’s unacceptable, please leave now.” Now they’re faced with this situation where they thought—who knows what they thought when they said that? They thought she was going to stop talking, maybe, or apologize, or whatever it is. But it’s just, you know, sometimes it is, like you say, it pays to be an asshole, or it pays to have an asshole around, I guess.

Francis: As long as you’re not a dick.

M.E. Right. (laugh)

Francis: The dick thing I don’t know if scientists have come up with whether it’s a nature vs. nurture thing, but in the absence of having a moral …a morality that’s universal and religiously based, I think a lot of people have whittled it down now to just, “don’t be a dick,” is sort of like enough for society to function with.

Chris: “Wheaton’s Law” comes up so often….

M.E. What is Wheaton’s Law?

Chris: Don’t Be A Dick.

M.E. Oh, ok.

Chris: It’s Wil Wheaton, the actor from Star Trek. He did a presentation at a conference a number of years ago, just called “Don’t Be A Dick.”

M.E. (laughs) What is his definition of being a dick? I’m curious, because in this idea of, like, making things better, that you are all, are interested in, I often wonder, you know, what is the acceptable kind of neutral behavior, you know? And like you say, in the absence of having like a religious morality or, you know, in this world in which we’re trying to kind of look for that neutral morality, well what does it mean to be a dick? Like what are the things we can say, you know, never do this behavior, this behavior is not acceptable. Since we’re saying that so much behavior is acceptable in certain circumstances.

Chris: One of the things we in the humanist community are constantly wrestling with is how to prove to people that we can be “good without God” and how we actually do have an ethical framework and things like that. So when we’re encountering somebody with what we would believe to be irrational beliefs—you know, somebody who believes in a flat earth or somebody who denies climate change, or—you’re not going to convince people to become critical thinkers by being a dick. You know, don’t go up to them and say, “you’re stupid.” Maybe try to engage in a conversation with them about why they believe in what they believe, and maybe try to point out the logical fallacies and hope that they come around to our side.

M.E. Right. Yeah, I can see a, one thing that I have been thinking about recently, you know, and has like kind of a religious kind of coloring to it, the belief, it definitely comes from a religious perspective, but I think it’s interesting when you think about, like, the idea of how can we have a happier society, like a sort of utopia type society? Religiously, that’s what we think of heaven. Right? Heaven, and the Mormons are interesting, they have like a very practical idea, I think, about heaven. They’re almost kind of like science fiction-y the way they view heaven. They think of heaven as being just like a place where people are much more advanced, right? You’re much more advanced to the point where you’re, you’re essentially like, god-like. Right? And how do you have a society in which people have such great powers? You know, they’re very kind of god-like, but you don’t end up with wars and destruction. You know, you don’t end up with them getting so advanced, so powerful, that one person can sort of like set off a chain reaction and destroy everything. And in Mormonism, the concept is, you cannot interfere with other people’s agency. That is the one sort of “don’t be a dick” rule, is that no matter what you do you have to allow the other person freedom. Right? You can’t impede on their own autonomy, I guess. So whatever you do, it can’t affect their own ability to do whatever it is that they want, or to be essentially unmolested. You know, like you have to make sure that all your actions and choices don’t affect others, or if they do affect others, that there’s some sort of consent. That you’re not infringing on somebody’s agency that way, which I think is a really interesting concept, and it’s a concept that I feel like has been…popular before, maybe, you know, there have been some movements in which it’s been popular, like, just “Don’t Tread on Me.” Right? Or even the Tea Party movement, which I think first started off with being, you know, like live and let live, and then kind of twisted to different things.

Chris: I hear similar ethical frameworks described, certainly by Unitarians, where belief in a god is optional but an ethical lifestyle is the most important thing. And I hear it at humanist events, really often, is, you know, in our “good without god” is, you know, here’s an ethical framework. It reminds me a lot of how you described your relationship with Mormonism in the book, in that it provides, you know, a set of guidelines for living and also gives you the rational reasons behind them.

M.E. Right. Exactly. It’s interesting that you say these things about ethics, but what…what are the rational reasons? You know, like, there are different…I guess if you are a, you know, sort of utilitarianism, right? Then there’s going to be a different kind of outcome that you’re looking for. Right? And that’s where the ethics come in, is that we’re looking for the outcome that allows for, you know, the best ultimate utility, right? And I guess Mormonism, the interesting kind of twist they have on it is this, this real focus on agency and autonomy. “Cause in Mormonism they have a belief, actually, that there was a war in heaven, and it’s interesting kind of thinking about where we are socially. You know, World War I, I think a lot of people don’t really understand what were the causes of that war. It seemed like people were just eager to go to war, right? World War II, there’s a greater kind of understanding of what the causes were, because they related to the aftereffects of World War I, which were so terrible. And now we have nuclear bombs, and I think the concern is—and I was just watching a Rick and Morty episode (laugh) where they reference this—you know, you’re going to get so advanced that you’re basically going to be able to bomb yourself out of existence. Right? The concern is that, you know, we have gotten so advanced that we have the real capacity to harm ourselves irreparably, as a society. That’s kind of the concern, and I guess the ethics, especially if you are going to talk about a utopia that would be so advanced, that would be a major concern. How can we avoid having these sorts of wars, or major conflict happen. And I think the simple Mormon answer, but it kind of makes sense to me logically, too, is that you just, you can’t interfere with other people’s agency. Like if somebody wants to—and this is the thing that I think is so hard for people to kind of get behind, today–is to allow somebody to be a racist. You know, this idea, and I talk about it actually in my classes that I teach, paralegal classes, is it OK to punch a Nazi? I would have thought the answer, from anybody who had, like, any sense of ethics, would be “no.” It’s never OK, like the ends don’t justify the means, right? Even if they’re a Nazi, you still, you still can’t punch them. Right? We have a legal system, right, where we hold people accountable in various ways, but you’re not a vigilante, you know. We don’t believe in vigilanteism, that’s why we have the legal system that we do, and all of its checks and balances, right? And the police system that we do, and all these things are intended to get to this point where we’re not just punching the wrong people, I guess, or you know, like people aren’t taking it upon themselves to inflict violence on others. But even rather well-meaning people, I hear them say, “yeah, it’s OK to punch a Nazi,” that’s an OK thing. And part of it I think comes from this idea that, you know, if you don’t fight bad things, you know, and fight them just as dirty, you know, as they’re trying to fight against you, then there’s going to be something that…bad happening. But I think a lot of it comes from just this feeling that you don’t like Nazis. You know, you don’t like Nazis, you don’t approve of the way that they are, you don’t agree with their beliefs, and you have a hard time existing in a world with Nazis. It makes you angry, you know, makes these people angry to live in a world with Nazis, and it offends them personally, too. 
I’ve heard people, and you know, there’s kind of a funny joke people say, you know, on Twitter, I guess a little bit of a meme, where they’ll say, you know I’m personally offended. Somebody says, you know, I eat my McDonald’s french fries this way, with the ketchup, and you know that’s disgusting, and they’re “oh, I’m personally offended.” You know? They’re…and it’s a joke, right? It’s meant to be a joke, but it’s also kind of true-ish, that people are getting personally offended, they feel personally attacked when they’re kind of called out about certain things. Or when there’s other people existing in the world that have nothing to do with them, you know, they’re just speaking, for instance, at a, the university that this person might be attending. They express feelings of, like, personal hurt, you know, or that they would be traumatized to have that person speak at the university. There’s something of like a personal trauma that they’re experiencing. That sort of thing, I think, wow. That would be a real impediment to allowing people to have the free agency, because if somebody wants to believe, you know, somebody wants to be a Nazi, why is that like a personally offensive thing to you? You may find it offensive, or you may find it logically, you know, like Nazism doesn’t make sense, logically. Or here are the other reasons why I find it distasteful or evil or wrong, that’s fine. You can have an opinion, but why does it offend you that somebody else’s opinion is different, I think is a really interesting question, and I think something that, you know, a sociopath would never experience. You know, so this seems really weird to me, to see it in so many people, it’s almost like this weird malady, like some sort of sickness like plague that’s spreading throughout, psychologically, throughout the community, especially I think in millenials, but not exclusively. You know, there are plenty of old, especially the stereotype old white people, who have similar thoughts. You know, they can’t stand the thought that there’s this gay couple next door that are raising their child with neutral gender pronouns, and have not told the child whether the child is a boy or a girl. There are plenty of people being offended by that, or the anti-vaccine people…you know, people get, want to kind of get angry about these things, and I guess I just don’t understand it. If somebody wants to do something, and it’s not infringing your agency, then why? And they’ll go through a lot of kind of mental leaps to get to the point where they’re saying, no, these people are affecting me. Because, you know, one Nazi’s gonna turn to ten, is gonna turn to 100 and eventually I’m going to end up in a totalitarian society of Nazism, and that will affect me. But I just don’t see it, and I especially don’t see how urgent it is from the one Nazi to you need to punch them. You kind of, what you were saying about “good without god” and these ethical lifestyles, and like this idea of trying to convince people to “don’t be a dick,” you know, with their irrational beliefs, you’re not going to convince people by punching them. It’s like an absolutely ineffective tool that they’re choosing to use, too. You know what, I think it kind of is, and it stems from like my own experience with sociopathy. 
I essentially think most personality disorders, from what I’ve learned, from what I’ve known, from what I’ve experienced and the people that I’ve met, I think most personality disorders involve an issue with the sense of self, with the sense of identity. Right? I think that’s why they’re called personality disorders, right? Like a narcissist has a false sense of self, there are other ways that people kind of distort their view of self in other personality disorders. For a sociopath, there’s just a very weak sense of self, there’s like, almost, like, you’re not an actual person, you’re just like, like a hologram or like a cypher. You know, an illusion—or you’re like, water or something. You take the shape of whatever vessel that you happen to be in.

Francis: People identify with their programming a lot of the time. Like, they’re born into a culture, they’re born into a sexuality or, you know, like they’re male, they’re female, they’re from this class; there are all these things that are sort of accidental, almost. They were just born into it, and then they identify with that. But how much of that is real in terms of what their potential to be anything is, you know what I mean? And what if, in that regard, sociopathy is more of an awareness that these are things that are almost like accidents that were put upon the human consciousness of this child, and then they identified themselves with it. Whereas, if you don’t grasp on to all that is like, this is me, then you have to figure it out for yourself.

M.E. Yeah, I think that’s definitely true. I wouldn’t say that that’s all that sociopathy is, but i think because they have such a weak sense of self, and there are other problems that are created by the weak sense of self. But I think one of the advantages is that they don’t feel like ego-hurts, for instance. You know, sometimes on the internet I guess we call it butt-hurt; you know, somebody gets so incensed because, you know, they feel like their ego, somebody has been challenging, kind of, their sense of identity, their sense of self. And that’s, is what sets them off so badly. I think we see it in politics a lot, when people have, you know, outrageous reactions, or people have these sort of temper tantrums. This idea that somebody has offended you—for whatever reason, like you said—because, you know, let’s say they’re American, and they identify so strongly with being an American, that whenever anybody says something slightly, what they consider to be unpatriotic, then they kind of flip out, you know, and they’re personally offended by it, because, you know, they’re an American. I think it is true, like a lot of these things are kind of…accidents, and I think the sociopath acknowledges it. I’ve never identified, for instance, strongly actually with my religion, even though I am interested in the religious beliefs, I’m interested in the theology, and the different kind of playing out those ideas—I never think of myself as being like, super-Mormon. I guess. Right? Like if somebody insulted Mormons or were like, “oh yeah, Mormons, polygamy, multiple wives, whatever,” I’d be like, “yeah, maybe this is true” right? (laugh) I wouldn’t take it personally. I have never identified for instance with my gender, either, and I think it’s one of the reasons why I wouldn’t either consider myself bisexual, because it’s not like I think of other people as having these very strict kind of gender roles, either. I don’t think of myself as being “I’m a libertarian” politically, but I don’t think of myself as being libertarian, in fact I used to never vote at all. I used to kind of just be like, these are things that are happening, and there’s not that much I can do, and kind of, who cares? You know, like I never had like these strong kind of beliefs or sense of identity. And these strong beliefs and sense of identity, I think, do harm people in particular ways. There’s kind of a concept…Paul Graham, if you google this you can read his article, it’s a really short article, and he says keep your identity small. And he’s just kind of suggesting there are certain things that you strongly identify with, for instance you love the Steelers, or something, you’re such a huge Steelers fan, if somebody says something bad about the Steelers, it can really incense you. Right, it’s going to trigger you emotionally, you’re going to have a particular kind of emotional, irrational reactions to somebody suggesting, you know, that the Steelers aren’t as good as you think they are, for whatever reason. And we see it in Europe all the time, with the football hooliganism, these soccer teams that hate each other so much that they’re like getting in fights in which people end up dying. 
And it seems so kind of absurd, but it’s like, if that’s what you identify so strongly with, then it’s not so different than, for instance, nationalism that we see today, where people are willing to kind of like shut off their empathy to, like, huge swaths of people just because they’re not part of their identity, and they want to maintain the identity they do have, which is a certain view of what it means to be Norwegian, or a certain view of what it means to be an American. And they kind of want to freeze in time this 2010 or 2019, whatever their view is, like, this is the perfect way to be Norwegian, we want to go back to the 1940s in terms of what it means to be Norwegian. They identify with these things so strongly that they’re willing to do similar sort of thing where the ends justify the means. They’re willing to infringe on somebody’s agency, they’re willing to punch somebody, they’re willing to, uh, hurt somebody in a way that I find to be…I don’t identify with it at all. Like, I don’t understand what’s going on there psychologically. I think there’s definitely something going on, but I think it’s an interesting concept sociopaths don’t have, other people do have.

Francis: One of the key things we’ve been talking about is, what are the things that are blocking our progress toward a more utopian society, and this idea that people can’t coexist, or that people should persuade each other to be more like them, is not really true, and it’s a huge impediment…so, I’ve been liking the metaphor of the mosaic, and you know, like, when you talk about neuro-diversity, what’s great about that is you, you’re saying like on a biological level we’re even different. But that’s fine, and people will have strengths in ways that you might not, because they are different. And this idea that we’re all the same, or that we’re all capable of fitting into this monoculture and being happy, is absurd. And once you move past that, and say OK, the reality is we are different from each other and that’s cool…you know, like it kind of opens it up to not be intimidated and appalled by differences in the way that those people who used to pick on us and, you know, want to beat us up were, where they saw us as a challenge to their own identity, as opposed to saying, wow, you know, that guy is onto something really creative, and you know, I would never in a million years feel comfortable walking down the street dressed like that but I’m really happy there’s someone who does, something like that, you know? But for some reason, it was seen instead as a challenge, and that’s I think one of the important things that we have to get rid of. And you know, this idea that we can’t accept certain types of people is problematic. You know, there will be people who may, initially at least in a more utopian direction, identify themselves with, like, you know, Aryan-ness or whatever. But I think ultimately, once you take away the dangerousness or the anti-social-ness of it, a lot of them might lose interest in being Nazis. Or maybe they won’t, who cares, you know? If somebody wants to go live with their own race, I don’t think anybody has a right to say you can’t do that. And one of the failings of the left, I think, is that they are so black and white about things that when it comes to people having freedom of choice, it just can’t even happen, because there’s all these rules of what’s right and wrong in a politically correct context.

M.E. Yeah, I agree. This is kind of an interesting story, right. So when the book comes out, I quickly get outed as a sociopath by some of my law students. And it gets on this legal gossip blog, and I start getting phone calls, I start getting emails from people who are finding this out. And some of them are really positive and nice, you know, and I still think these people are great people and I really respect them.

Chris: I think anyone who spends the time to actually read your book is going to have a much more positive view of you as a sociopath than somebody who didn’t. I think the book really answers a whole lot of questions, just the average man on the street wouldn’t think, had he not read the book.

M.E. Yeah. I hoped that that was true, and I think it is true for a large proportion of people, but I think there’s still more than I thought, there was still a proportion of people who, for whatever reason—and you know, confirmation bias, we know, if you have a previous belief it’s really hard to let go of that belief. Certain types of people are more open than others, certain types of people hold more rigidly to their previously conceived beliefs than others, for various reasons or in various contexts. And so there was kind of a negative reaction. But I had kind of hedged my bets a little bit with this, you know, I was a law professor, wanted to stay a law professor. Was hoping I would stay a law professor—the end of the story is, no, I did not stay a law professor, right. So that kind of, uh, clues you in to how people ultimately reacted. But one of the schools I taught at was in San Francisco, I was there as a visiting professor, and I intentionally chose to visit there, because I thought, well, in hedging my bets, you know, if I have connections with more law schools where they know me personally, there’s a greater chance for instance that this school would end up hiring me even after the book was done. And I thought, San Francisco, liberal San Francisco, right? They’ll for sure be able to look at this and think, hey, you know, this person is a…you know, neuro-diversity, we’re pro-diversity and we’re pro people being able to be the people that they were born as, right? You know, we’re willing to support that, I thought that was maybe going to happen. But they had the worst reaction of everybody…they banned me from the campus, and not only did they ban me from the campus, they banned me from being within a thousand yards of the campus. It was like the entire downtown San Francisco area they allegedly banned me from and said that they would call the police if I, you know, crossed that boundary. Of course that’s not legal. There’s nothing legal or right or rational about it. If you read the email they sent me, it was just such an irrational kind of knee-jerk reaction, which was just kind of like, sociopath = bad, for whatever reason. I’m not sure what their sort of reason is, but there was something triggering about the very fact that I was, like, a sociopath that they just found so unpalatable, not just unpalatable, but dangerous, you know, unpredictable, who knows what it was. They went to the extent of violating so many of my civil rights, or at least purporting to violate my civil rights, for just having published this particular book. It’s a really interesting reaction, and it’s one, I don’t think it’s indicative of all kinds of reactions, or like a liberal progressive thing. When I think about—and it is true, you know, politics, like both parties seem very black-and-white thinking, in kind of unpredictable ways for me; I guess that’s why I’m a libertarian, because I don’t understand how you can have a certain stance on the death penalty and abortion, for instance. (laughs) Which both parties have, which seem to contradict each other, a little bit, right? So these sorts of reactions, you know, the reaction that these people had, like why is it so threatening to them that I’m a sociopath? I don’t have a criminal record, I have no history, no past history of violence, and to have that sort of reaction where you’re just like, you know, I can’t even stand to be around you. I said, OK, I’ll just come pick up my things or something. They said no, we will mail you your things. It was that bad.
And I was like, OK, well I still need to like grade my finals, and they’re like, you’re gonna need to do that, you know, a thousand yards away from campus, you know, never come back sort of thing. And I….I even said, you know, I feel like you’re having an overreaction, they were like, no, we have thought about this a lot, you know, we’ve considered it. So it’s an interesting thing, in this world in which we allow for neuro-diversity, would that be something that is acceptable to do? They have a fear, and that’s what I kind of understood. You know, they have a—I don’t think I’ll say, legitimate, they had a legitimate fear in the sense that it was an authentic fear. They were feeling fear. They had like an actual sense of fear, but it was an unjustified sense of fear, right? Same with these people with the identity. They have a hurt, you know, they feel personally offended, but is it like a feeling of personal offense that we can kind of get behind and say, OK, you know, we’re going to allow you to feel personally offended and act however you want. Hit a Nazi. It’s fine to have the feeling, I guess, but then the action cannot be to infringe on people’s identity…er, I’m sorry, sense of agency. Even if it is, you know, you have that fear, you have that hurt, you have these various feelings.

Chris: So I was just sort of wondering how you felt neuro-diversity might fit in with diversity in general, because people with disabilities are the only minority who intersects with every other minority, and is also oppressed by every other minority.

M.E. Yeah, that’s a good point. You know, it’s interesting, I didn’t even think about it, but it’s like, even within neuro-diversity, people who say that they’re pro-neuro-diversity, they don’t like sociopaths. You know, it’s fine if you’re autistic—you know, in fact I have a Downs Syndrome uncle, I write a little bit about him in the book, he had violent tendencies that were so severe that he could no longer stay with my aunt, who was his caretaker. He had to go to an institution basically. And do I think that people with Downs Syndrome are all violent? No. But do I think that violence manifests itself in a way that’s kind of different for him, you know, with Downs Syndrome? Yeah, because we can’t call the cops, we can’t throw him in prison, we don’t have, kind of, these avenues that we have with people who, who don’t kind of have disabilities. I just think it’s a really interesting way to treat people. Like there are true differences in the way that we treat people—I think it’s great that we’re not throwing him in prison, but I think it’s also weird that we throw people in prison who are sociopaths, then, right? Because they also have unique kinds of issues; they don’t respond well to punishment, just like my Downs Syndrome uncle. I think it’s so great that people have become so, kind of, open minded about children with autism now, right? I think it’s great for the children, I think it’s great because it was like, kind of, kicking a wall, you know, to try to force these kids to behave like normal kids, and for what reason? You know, who is it benefitting? To try to get them to kind of conform to these, like, ideas of what a good kid should be in an educational setting, 32.5 kids per class or whatever, and they’re all sitting there with their workbooks. You know it’s such, so weird how we’ve gotten rigid about these ideas, but it…yeah, I find it…hmm…a little appalling that the same people who promote neuro-diversity in one area, and seem to be so, kind of, accepting and willing to accept, are the same people who are banning me from campus, a thousand yards from being on campus. It’s such an overreaction to me.

Francis: That’s kind of an awareness issue, don’t you think?

M.E. If I thought—and this is kind of a little bit of jaded—part of my legal research was actually in information sharing, and misinformation, right? And what we can do about, kind of, free speech issues, you know, like what does the law have to do with free speech and do we really think that there’s a marketplace, a functioning marketplace of ideas, such that good ideas survive and bad ideas eventually kind of die out. And I think, you know, in the past five to seven, ten years, we’ve seen a lot of social behavior that suggests that our previous conceptions that people are really looking for truth and for facts—they’re not completely false, you know, like I think people still prefer the truth generally speaking, but there are situations in which they don’t. So it’s a misinformation thing, but I think there’s something else underlying their…reluctance to give the same kind of neuro-diversity privileges to other people. I’m really interested, I don’t have like an answer on it, but I think it is an issue when I was so interested in this concept of trying to make a better society and kind of, what would it look like? Of a utopian society and, uh, kind of the science behind it. Are there impediments to that—scientific, psychological, resource, economics, to having this kind of utopian society, and I immediately started thinking of these, because it’s been something that I have grown up all my life seeing, kind of the back side, the hypocrisy, of empathy. You know, the hypocrisy of compassion, right? The hypocrisy of these, you know, like these good feelings, kind of tribalism, you know, the negatives. Even before other people saw it, I saw it. Mob mentality has been, like, my constant fear, you know the idea that—and we know that it happens, you know, it happens to normal people, where they all just happen to gang up.

Francis: Well, I consider myself highly empathetic, and I would consider mob mentality one particular expression of empathy. I don’t think you’d have a mob otherwise.

M.E. Yeah. (laugh) I agree, but I want to hear your ideas on it. Like go ahead and explain it.

Francis: Well, the thing is, empathy connects people. It makes you feel their feelings, but empathy is also sort of like an on-off switch. And if you can identify a different group of people as “other,” then you can feel empathy towards your tribe and then no empathy at all, if anything the opposite, towards them. You know, which is why in every war, they try to like de-humanize them, make the enemy seem like they’re really different on some fundamental level. And, you know, what they’ll do is they’ll whip up the empathy like they did in the lead up to the Iraq wars, where, you know, they just saturated you with the 9-11 stuff and the personal stories of how these horrible terrorists, like, they hate freedom, they want to do this, they attacked our homeland—and it’s like an empathetic response towards the people that were suffering, to such an intense degree that it raises that mob mentality, where they’re like, “yeah, let’s kill ‘em!” You know? But I think if you didn’t have empathy at all, that wouldn’t be possible, because you’d be like, rational, and you’d be like, um…you know, you’re not subject to that same level of mob and tribalness.

M.E. Yeah. You know, I’ll take it one step further, I think it’s like not just empathy maybe, although I think it has a role, and say it’s also this, this kind of sense of identity. Because you think, would a mercenary fight as well as somebody who’s a true believer? You know, we see true believers doing outrageous things, you know, horrible things, 9-11, for instance. Right? Because they identify so strongly, they think this is like the absolute truth, and I’m going to impose this truth on everybody. You know, the infidels, the non-believers, they have to hear this message and the best way to reach them, I guess, is to kill them—that’s kind of the attitude. We used to think of that attitude as being associated mostly with extremists and terrorists, but now we’re like out there punching Nazis, right? I think Antifa is one of the most terroristic organizations that we have operating in the United States. But I don’t think it’s condemned often enough, to like go out there and commit violence against people who have opposing viewpoints to yours?

Chris: I don’t think Antifa has enough power or actual organizational capabilities to be a real threat.

M.E. I hope that’s true. (laugh)

Chris: I think they’re a bunch of kids who show up looking for a fight, and…we had that back in the 60s with the counter-revolutionaries showing up to start violence and…you know, I think they’re just troublemakers.

M.E. Hmm. I hope.

Francis: As Joe Strummer once said, they’re kind of Stalinist, maybe.

M.E. Stalinists in what way?

Francis: In the sense that, like, they, they act like they’re for the people and for like these higher values, but they’ll like squash you if you think differently.

M.E. Right, exactly. You know, I have a little brother who is solidly hipster Millennial, and I kind of use him as, you know, keep my fingers on the pulse (laugh) of that type of generational, like…he’s socialist, you know, he loves Bernie Sanders, the…kind of, this type of thinking, right? Which I don’t think is necessarily bad, it’s just like there’s a particular type of group of people that I think are associated with those kind of beliefs. And I think they’re sympathetic to Antifa, I think these younger people are kind of like…you know, I keep repeating it, but it’s like the “ends justify the means,” this concept of, we’re in this fight, good and evil, and we are on the side of good, and the people who disagree with us are necessarily evil.

Chris: I don’t trust Bernie Sanders, ‘cause he reminds me too much of myself. He’s an old man with messy hair who talks about things he can’t deliver.

M.E. Um-hm. (laughs)

Francis: I think the, the reason why people do find some sort of inspiration in Antifa is that, you know, there’s been this like, mechanism that’s been going on for a long time where the right wing are like street fighters, like they just want to win, they don’t give a fuck how they’ll go about it, if they win, they win. The left is like, you know we have to be democratic, reach across the aisle, and provide like persuasive arguments and data, and all this stuff, and when it gets down to the fight, they get their asses kicked, because, I mean, they’re bound in a way that people who are like street fighters are not bound. It seems like there are people who are willing to have the strength of their convictions enough to really fight for them. I’m not saying that that’s a good thing necessarily in this situation, but I also think there is a frustration amongst people on the left that, they’re just getting steamrolled.

M.E. Yeah. I see that, I see kind of why they feel that way, you know, I don’t think it’s completely irrational, but I, I guess I disagree with that kind of attitude that they’re getting steamrolled. They kind of, and this is…it’s, again, I think it’s this idea of identity and maybe identity politics, now that we’re talking about politics, is that any kind of—if you have such a strong belief that, you know, the people that oppose you, you know, it’s almost like a zero-sum game. You know, any gains that the right makes, you know, make the left feel like they’re getting steamrolled, and I’m thinking, you know, not really. You know, what happened to, like, compromises and concessions? Like most people would think of that as being pretty weak now. And they’re so sick of the compromises and concessions that they think that they’re constantly getting ripped off. Whereas the left, they’ve had control of the Supreme Court basically since the Warren Court, right? Roe v. Wade, and they’ve maintained that for decades and decades, right? Now they’re upset at the thought of that ending. They don’t kind of see that they’ve had their way, essentially, for the past, what is it, like four or five decades, starting with Brown vs. Board of Education. I think, controversially, and you can, controversially I say, but you can find Ginsburg also saying similar sorts of things about some of these cases, these big cases that even, you know, people who haven’t gone to law school know about. Some of the ways that the Supreme Court kind of justified them—they’re not good methods. They broke particular judicial norms, you know, which is something we keep hearing about, you know, we’re breaking these norms, and it’s again kind of like the ends justify the means, right? Like we’ll be a little bit underhanded about the way that we go about kind of getting this result, because we’re so in…we’re so interested in the result. And I don’t think that that can happen ever. My kind of philosophy—and this is kind of a Stoic philosophy, too—is that it always has to be process-oriented, it can’t be result-oriented. Because if it’s result-oriented, then it really does turn into this kind of like zero-sum game. You can’t reach across the aisle if you want something and they want something else, you know, it always has to be a compromise. There can’t ever be, like, well, let’s think about ways that we can like engage in this process and get someplace that, you know, neither one of us can even see right now. But it’s gonna be ultimately a better place than the current vision that we both have. Does that kind of make sense?

Francis: Yeah. And you know it’s a little bit dualistic to, to me in my mind, like we’re…we’re trying to come up with the utopia where everybody gets to decide who they are, everybody respects everybody’s personal decision on who they are, and we have a society that gives them the resources to actualize that life in a really exciting, fulfilling way. Or, you have this idea that, you know, it’s my way or your way, and it can’t be both ways. And actually, in reality, it can be both ways a lot of the time. And that’s why our system is failing, I think.

M.E. Yeah. This idea of, just, coexisting seems to be kind of getting pushed out the window, there’s—I don’t know why, is it social media? We’re like, we can’t stand, you know, the cousin on Facebook or whatever who has different political beliefs than us, or something, and so people are just getting so incensed about it. They’re getting outraged, you know, we’re being steamrolled, where all it is is just (laugh) it’s just kind of like the natural shifts of power. I guess I just don’t see where anybody’s getting steamrolled, but both parties think that they are, because in the zero-sum game, any time that the left wins, the right loses. Any time the right wins, the left loses. That’s kind of the attitude that people have. There’s no longer this idea of, like, hey, we have a constitutional republic, a democracy, and this is the process, and if you didn’t like the result this time, then we have particular institutions that have been around for centuries, you know, that you can use and work within to enact the change. Now people are talking about, you know, again, my little brother, who I think is a good indicator, kind of a barometer of the way people are thinking, has just like, he thinks that the government is essentially useless, the Constitution garbage, you know, because it doesn’t lead to the result he wants. And that’s kind of the mentality you have seen increase everywhere, where people are thinking constantly about the ends, they’re no longer thinking about the means that we have been talking about. You know, how can you live a meaningful life, how can you live a life of fulfillment and purpose, and not have it be impeded by others. You know, like not have other people be offended by the fact that…

Francis: ..Or anyone’s business…

M.E. Yeah. It’s not anyone’s business. But people have made it their business, it’s gotten so popular, I think, in the past decade, to make everybody’s business your business. And I just don’t understand why.

Chris: Well, and we also live in a world where privacy no longer exists, I mean, you tried to publish a book anonymously, and how quickly were you discovered?

M.E. 24 hours (laugh), 48 hours…yeah. Definitely that same week.

Francis: Maybe what the college was angry about was that you didn’t abide by the “don’t ask, don’t tell” rule.

M.E. Mmm. Do you think there is, like, still “don’t ask, don’t tell” for that sort of stuff?

Francis: Well, there’s so little awareness about it, first of all. I don’t think the true nature of the heterogeneity of human minds, of psychology, has been brought to the attention of people in general. When you’re talking about people maybe that don’t have a lot of empathy, and you know there’s always a whole spectrum to everything. My yoga teacher recently, and if you’re listening, my apologies if I’m bringing you up without asking, but, she was talking about how she works with people who are dying. She helps them with the death process, she calls it being a “death doula.” And I said to her, you know, “how are you on the empathy scale? Do you have a lot of empathy, or not too much?” And she said, “ah, I don’t really have much empathy.” I said, “I guess you’re probably libertarian, right?” And she said “yup.” (laugh) But here is a woman who does, like she’s such a, like, angel, kind of, you know? Like she’s doing this job that I could not do, because I have so much empathy, I would make these people feel horrible about the fact they’re dying. She can be emotionally detached and actually do the job properly, you know. You’re like a trailblazer in that you even wrote this book and got people thinking about this stuff and opened the conversation, but you just opened it, as far as I can see. And it definitely, in my mind, impacts a lot on the idea of neuro-diversity and that sort of stuff, but I also feel like it’s also really, it’s beginning. You know, it’s like a nascent kind of direction that we’re going in, a movement where we’re going to find that different people have different aspects of personality that we have pathologized. You know, we take like extreme examples of it sometimes, and turn it into a disorder. But you know, it’s a spectrum. You know, like I think people don’t understand, you know, like you have all these primary colors that we’re figuring out now psychologically, but you know, like people don’t, I don’t think, really understand what all that means, and what’s there. And honestly, even people like me who’ve been studying this stuff and trying to like really wrap my brain around it and figure out how to use this to make society better, I feel like it’s still very new, to me, the cutting edge of human consciousness in some ways.

M.E. I hope. You know, I hope that this is, this is happening. I think people are becoming more aware. I think that, again, kind of—this has been the theme of at least my comments—the impediment I can kind of see to people being more open about the possibility that other people are living equally valid lives, even though they’re making very different choices, is this concept of identity. Can you believe, for instance, can I believe that my religion has truth to it without necessarily thinking that other people’s beliefs are wrong? Can I think that one way is a good way without thinking that other peoples’ ways are bad? I think there’s kind of like a, just like a…a broadening of your perspective that needs to happen a little bit more before we’ll get to that place where society is really capable of having that degree of acceptance for neuro-diversity. It reminds me, I follow on Instagram, I think it’s called “medicalpedia” and it has a…I don’t know why I follow it, it has like all these gruesome, you know, photos. Somebody, you know, got their foot cut off in a motorcycle accident, right? And it will show–I think it’s for medical students, but it’s really interesting, I think to learn so much about the human body and the way that it’s connected by seeing it in these kind of distressed ways. And I recently saw a picture of, there was a homeless man who got brought into the hospital, and he was complaining of having a lack of sensation in his feet, and he’s also diabetic, right? And they’re taking off this sock, and underneath the sock—yeah, you can look this up—are hundreds of maggots, and I was like “what!” Like as they’re peeling away this sock, there’s just these maggots, like falling off the sock, I mean literally, like his foot is covered with maggots. And you start seeing the maggots start coming out from under the skin, you know, and you’re like, how can somebody let it get this far? I’m sorry I’m not able to attribute the source on this one, I recently read this article, this comment about this guy talking about homeless people, and his comment was basically that if you don’t understand other people’s choices, right, if somebody’s choice seems so kind of stupid and wrong to you, then you really don’t understand the full background of their situation, their environment, why they’re making that particular choice. Like if we could get to the place where, instead of just decrying everybody’s choices as being wrong, where we would try to understand why is it that you’re making this choice? You know, why did you let it get this far? What are the kind of influences that lead you to believe this politically, you know, why is it that you have this particular emotional reaction to things? If we really tried to kind of understand why, then I think we would, we’d be able to understand better how everybody has good in their choices, you know, everybody is kind of doing the best that they can, and that there really is an ethical, kind of ultimate good in allowing people to have the autonomy to live their own lives, make their choices, that there’s value in that and we don’t have to have everybody look and act and be the same.

Chris: I think that’s one of the reasons why I don’t really like labels like “left” or “right” or Democrat-Republican, blue-red, or whatever; because I think it creates a false dichotomy, and I think people have embraced that false dichotomy. I mean, I think we probably, as Americans, you know, Trump supporters and Clinton supporters and Sanders supporters, probably agree on 90% of stuff, if you asked them as individuals. And yet we join these teams, or become members of these tribes, and all of a sudden, as you were referring to earlier, it becomes a black and white, win or lose, situation.

M.E. Heh. Yeah, and it is kind of weird. Let’s say I like the Steelers so much, let’s say I like a particular player on the Steelers. In two years he’s going to be playing for, like, the Chargers or something. You know, it’s so weird that we’re kind of this way, we’re so rigid, in thinking that this moment in time is going to stay this way. You know, the Republican party is always going to be like this, the Democratic party is always going to be like this. Whenever there’s a new movement, right, we’re like, whoa! You know, this is shaking things up so much, and how are we going to kind of like live—I think it really just illustrates the falseness of the labeling. Right? That there isn’t so much a dichotomy as we thought before, even if there are things that are dichotomous, we’re not necessarily perceiving them correctly, they’re going to shift, possibly, and having more of this openness to the way that we understand labels. I went to Cambodia—when did I go? like 2010 or something—and at the time I had travelled kind of a lot of places, but still, out of all the places I’ve travelled, it seems like the people were…and you know, I understand, I was only there for a few days, it’s hard to get an impression, but my impression of them was that they were very happy. They were very happy, very kind of, um…content. And it was also one of the poorest places I have ever been. You know, so poor that I realized by the second day, when I saw like a little piece of plastic wrapper out in kind of these rice paddy villages, I was so surprised to see it, and I was like, why am I so surprised to see this candy wrapper? And it’s because I hadn’t seen any plastic waste or any waste at all in the past, you know, 24 hours, right? So super, super poor, even compared to other Southeast Asian countries. I kind of wondered why they’re so poor, and I thought maybe it has to do with, you know, Pol Pot and the killing fields and they had that civil war for so long that they were just sick of the violence, they had, you know, a similar sort of situation, one type of people trying to impose on the other type, and the types are fighting and they both think that they’re justified, and they’re willing to, you know, use violence to impose their beliefs and their worldview on other people; they can’t stand the fact that there are other people different from them. You know, very similar kind of story that seems to be behind so much conflict in the world. But after decades of this, and I think 20% of the Cambodian population was killed? And still now it has one of the highest amputee rates, right? Because they land-mined the entire country, basically. In fact, like you can still…I forget what the rates are, it’s like a thousand people a year die from land mines, and then many more end up maimed or otherwise hurt. There are like, you know, certain places that are just unoccupied because of land mines, there’s like 10,000 land mines or pieces of unexploded ordnance. But they’re just sick of it, right? And so maybe that will happen to us, too. Maybe we’ll be like, you know, shaming each other, playing the zero-sum political game, until like we get to the point where we realize from personal experience, you know, decades later, that it was a terrible way to govern a country, it was a terrible way to relate to our fellow humans, and that there was just such waste. You know, human, human waste or otherwise, that we’ll just learn. You know, we’ll learn even though we have the capacity to do this particular thing, we’re not going to do it.
A similar thing happened with World War I and World War II.

Francis: That’s such a great thought. Before we end, I would like you, if you can, to maybe talk a little bit about what you’re working on now. You’re researching a book, right?

M.E. So I, I’ve started writing a second book, probably in the last year or so. The reason why I started thinking about it is, you know, I’ve learned a lot since the last book was published, I guess 2013, so almost six years ago, in May. Since then, I’ve gone to a therapist, I’ve learned some other things, and so I kind of thought, you know, I have…I have some more things to share. And so I wanted to share it with people, and there was this…young man, he’s in his mid-20s in Australia, and he wanted to join the Australian version of like, essentially the SEALs, their Navy special forces. But he wasn’t allowed into the military, because during his college era, somehow he had gotten the diagnosis. He was seeing a therapist. The therapist never told him what the diagnosis was, but it was in his medical records, and so when he went to apply to join the Navy, they saw that and said, you know, you’re not getting in. And he was like, “M.E., now what? Now that I have this diagnosis, I just found out about it last month, and it’s ruined my dreams of what I wanted to do as a career”…You know, he was like a semi-professional swimmer, so it was definitely what he wanted to do. And I thought, this is the perfect question that I feel like I can kind of help answer. And I think I had a little bit of hubris, too, when I started: I’ll go down and meet him, and I’ll try to meet some other, kind of, sociopathic minded people, and try to kind of start sharing my ideas about things. And I…I still do think I have like, some insight that does help them, and I do try to help them. I’ve been to, now, I don’t know, like ten different countries or something, I’ve been traveling the world this past year, meeting these people and other people who aren’t necessarily sociopathic minded but are interested, or have similar experiences, you know, are interested in neuro-diversity, or have friends, relatives, who are sociopathic minded. And I’ve since learned a lot, and I guess I’ve learned even more than, you know, thinking I have the right answers. And I guess talking to you today, this may explain a little bit of the background of why I think it’s so important to hold your beliefs lightly, gently, and keep an open mind, is that everybody has such different experiences. You know, the British sociopaths are different than the French ones, are different than the Russian ones, are different than the Australian ones…and it’s not even just their nationality, obviously, that’s different. It’s just everybody is so different, and things manifest themselves in such different ways for different people, and the answer that’s right for me is not necessarily going to be the answer, or the solution, for somebody else. So I guess even within the sociopathic community, I’ve learned to understand and respect that a little bit better, and understand that everybody essentially has to just make their own way, and that there are certain things that are more true than others, or will tend to be true in more situations than others, but I think when you take anything to an extreme, you’re gonna end up in a bad place. That’s true of virtues, they can become vices, and I think that’s even true of “truth,” that you can distort truth by taking it to an extreme. So, I guess I just think more in terms of everybody is different, there’s such a huge diversity of people, even amongst the sociopathic community, things that work for us, things that seem right for us, maybe aren’t going to be necessarily so for other people, and to just kind of be OK with that.

Chris: And with that said, what makes you optimistic about the future? I mean, I don’t think you’d be out there writing books and doing the activism you are if you weren’t optimistic at least at some level.

M.E. Mmh. That’s so true. It is so true, I really am optimistic about the future. I think one thing that makes me optimistic is that I, myself, have changed, you know. I don’t think I’ve ever been really kind of rigid about beliefs, even though I wrote the book; you know, the first book is just kind of a snapshot of what life was like, and a lot of it was, by the time I even was writing it, it wasn’t stuff that I was necessarily, you know, still engaging in. I guess (laughs) some of the more childish things, you know, or some of the more immature things, by the time I wrote it, they were mostly in my past. And things have changed similarly, like with therapy, other things that I’ve been doing, you know, relationships that I’ve had with other people—I just see things as being, like, this journey of change. I have seen change in my own life, I’ve gotten to happier places in my own life, and so I…just, I guess I believe in the power of change? Self-willed change, and I think I believe in the, I guess, the divinity of the human condition. You know, I believe in the, the potential that humans have to do really great, good things, and to have very deep, meaningful lives, to live lives of meaning, and to understand that they have quite a bit of power. Everybody has power. Sociopaths are more aware of and thinking about power every day, but everybody has power. You have the power to do great good, and great evil. Which is why, you know, even if you’re just kind of shooting off a tweet, or shooting off a Facebook post, it’s important to kind of be more considerate and aware of the sorts of effects that that may have on people, to be more considerate and aware of the way your actions affect others, and this kind of gets to this concept of treating everybody as being equally, I guess, sacred. You know, and not interfering with their own ability to change and to…make their choices freely and uninhibited by…your own issues and hangups and your own feelings about what they’re getting up to.

Chris: Well, thank you, M.E. Thomas, for joining us on Making Better.

M.E. Thanks so much, Chris.

Francis: Yeah, thank you very much.

M.E. Thank you, Francis.

—end