New Year’s Resolutions Designers Are Setting for 2020

At the very end of last year, we asked our design community on Instagram what goals they wanted to reach most in 2020. As you can imagine, responses were scattered across the board, but there were a few popular new year’s resolutions that we thought were worth sharing. Check out what your design peers are looking to accomplish next year and get inspired to reach for the stars in 2020.

[Shots: “Motivation” and “Keep Going Poster”; row 1 by Jennifer Hood for Hoodzpah, 伍陆柒 for innn, and Zellene Guanlao]

Get into Product Design

Whether it be starting a career in Product Design or transitioning to a different design discipline, our community is eager to dip their toes into what our Global Design Survey identified as the most coveted type of designer companies are currently looking to hire:

“I want to be a Product Designer and want to start [my] own business.”

“Complete the UI/UX design course, grab a job, aim for Product Manager ❤️”

“Transition from Graphic Design to UI Design.”

Instill good self-care habits

Every creative practitioner can benefit from setting aside time to take care of themselves outside of work. As you can see, many designers are hoping to prioritize healthy lifestyles that include self-care in 2020 so that, in turn, they can create better work.

“For 2020 I want to get my daily routine back in order, wake up early, meditate, eat fresh & healthy, and learn how to divide my time between work, personal development, and family life. And maybe, just maybe… I’ll finally finish designing my portfolio website.”

“Practice and prioritize true rest, to both fuel creativity and be a healthier human being.”

“Stay happy and healthy in spirit and body with family and friends. Work creatively and effectively.”

Create something every day

You’ve probably heard time and time again that the most effective way to develop your design skills is to create regularly and consistently—whether it be a 100-day or a 30-day design challenge (or even Dribbble’s Weekly Warm-Up!), designers want to challenge themselves to get out of their comfort zones and keep creating in 2020:

“To create something every day and enjoy it in the process.”

“To increase my knowledge and work every day on posters and graphics to become better and more expressive, so that I can improve my skills and design eye.”

“I want to create a piece of graphic design work every day.”

Land a full-time design job

One of the more popular new year’s resolutions seemed to be to land a great design job. Whether fresh out of school or transitioning into a new role, designers are ready for their next big move, and we support it!

“Get a job. Pretty self-explanatory: have to pay rent, bills, food, etc.”

“Get a job! I’m a fresh graphic design graduate.”

“To get a full-time job (just graduated) and to teach myself After Effects.”

“Get a full-time UI design job. That’s the first step.”

Share work more often

You know that feeling when you create something that you don’t absolutely love or don’t think is worth sharing? Well, designers want to get over that impostor syndrome in 2020 and keep making efforts to share their work more often. Please do! We’d really like to see it.

“To not be so hung up on being perfect, and commit to completing work.”

“Show my work to the world. So much work was done but nothing uploaded on Dribbble.”

“To launch my first comic book. It’s been a dream of mine for so long and I’m ready to share some magic, art and a cool story with the world!”

Expand on new & existing skill sets

Of course, with a new year comes an itch to learn new skills or even develop existing ones. Designers expressed interest in boosting their skills in all kinds of different areas in 2020, especially illustration! Check it out:

“I’d love to expand my client base and, oddly enough, do more food-related illustration and design. Food is my second favorite way to create and it’s so fun to illustrate!”

“Learn and become proficient at Procreate. [I’ve] been attached to the pen tool forever.”

“Buy a Wacom board and start digital illustrations.”

That’s a wrap! We hope these designers’ new year’s resolutions inspire you to think about your own goals for 2020. Rest assured, whatever you want to accomplish next year, you have a huge community of supporters cheering you on. Let’s make this year a great one!


Find more Community stories on our blog Courtside.
Have a suggestion? Contact stories@dribbble.com.

10 Lessons From 10 Years in Marketing and 10 in Sales

In 1998, my software developer brother said: “You’ve gotta get into software – the internet is changing everything!” Fast forward to 1999: after passing a couple of interviews and slogging through some C books, I packed up my life, took a 50 percent pay cut, and headed to Denver to work as an inside sales rep for my first job in the software industry.

Today, exactly 20 years later, I lead a product marketing team at Salesforce. How did I get here? In summary: I spent 10 years in sales. Then, in August 2009, I walked into our HQ in San Francisco for my first day as a marketer and I have been in marketing ever since.

After 10 years in sales and 10 years in marketing, here are the 10 biggest lessons my career has taught me:

1. The best marketers never stop listening to customers.

The best marketers I know are the ones who are always listening to customers. They’re passionate about delivering innovative and personalized experiences to their audience in the language of their customers. This is marketing at its best: simple and approachable messaging that your buyer understands.

2. The best salespeople never compete.

The biggest difference between the best salespeople and everyone else is how they handle competitive opportunities. I once interviewed 17 salespeople who had gone to the 100 percent club three years in a row. My hypothesis was: if you went to club three years in a row, you weren’t just good — you were the best. We wanted to learn what they were doing that was so different, and use that in our marketing. One of the questions we asked was “When you’re in a competitive deal, how do you deal with that? What is your approach?”

The best salespeople never got into a “feature comparison.” They focused on the customer, spent time listening and deeply understood the customer’s business. If you’re pulling up your feature comparison doc thinking you’re going to win a deal, you’ve already lost, according to these best-in-class sellers. 

3. What sales wants, marketing wants, too.

Both sales and marketing teams want the same things. They just have different ways of saying it. Sales talks about opportunities and pipeline. Marketing talks about leads and the funnel. At their core, those things are the same. My advice to marketers: learn the ‘love language’ of sales.

4. Never underestimate the power of a customer story.

Marketers. Salespeople. Customers. Prospects. At the end of the day, we’re all human — and there’s nothing more human than a story. A customer story shares a real-life experience that someone has had with your brand. It makes people who aren’t yet customers say: “hey – I can see myself in that story. We can do that too!”

5. Everything is about budget and resources.

Marketing effectively comes down to money and resources. You can’t support events, demand gen, website design and everything else marketers do without budget and people. 

Sellers reading this: understand that marketing can’t do its job without money and people, and have empathy for that. When I was a seller and marketing told me “we don’t have the budget,” it always made me grumpy. I later learned that it was often not their fault, and that I could have done more to help.

6. Marketers should think about products the way salespeople do.

In my experience, most marketers are not nearly as well-versed in the products as they are in marketing. What I learned in sales was that the best salespeople understood what they were selling. Deeply. When I first started in marketing, I took this seriously and was a better marketer for knowing our products as well as anyone in sales.

7. Salespeople should understand different marketing roles.

We have brand marketing, public relations, demand generation, product marketing, marketing operations, and more. Each of these marketing functions is its own specialization that often takes years to master. The best salespeople understand how to work with each marketing function and take advantage of their different skills and capabilities when needed.

8. Marketers should spend a day in salespeople’s shoes.

Cold calling leads, meeting prospective customers at events – it is hard. Every salesperson you meet is under a lot of pressure to meet their quota and close deals. 

Marketers, recognize that whatever you’re working on is probably not as important to sales as the deal they are working hard to close – give them space. Spend a day shadowing a salesperson on your team. You’ll have an entirely new perspective on how hard their job is, and you may come away with a few things you can do as a marketer to help them.

9. Salespeople should spend a day in marketers’ shoes.

When I was in sales I thought marketing was for people who couldn’t sell. Nothing could be further from the truth. Creating content, managing customer relationships, building campaigns, measuring leads and pipe-gen – all of this can be taxing. And extraordinary marketing is more science than art, with persona research, market research, audience segmentation and more. 

The best salespeople understand that marketing is working hard on sales’ behalf. If you’re in sales, get to know your marketing peers. Remember: whatever you can do to support them will ultimately support you too.

10. Great marketing can set the pace for a whole company – but you need great sales, too.

An entire company can rally around a catchy tagline, fun mascots, or a cool ‘out-of-home’ campaign. A well-planned marketing program can create momentum that sales can benefit from for years to come. But without a sales team to work the leads that result from the increased demand – what good was the campaign? Recognize that to win, you need both teams operating as one.

Which path should you take – marketing or sales?

In my 20-year career in marketing and sales, I’ve learned that both teams work best when they’re aligned. At the end of the day, we’re all working toward common goals. Here’s to the future – a future where marketing and sales understand each other better than ever!


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.



About The Author

With more than 20 years of experience driving marketing programs for B2B companies, Nate Skinner is the vice president of product marketing for Marketing Cloud at Salesforce, leading marketing strategy and execution for Pardot, the company’s B2B marketing automation product. Prior to Salesforce, Nate was the chief customer officer and, earlier, the vice president of customer marketing at Campaign Monitor, where he was responsible for all customer-facing programs for the email marketing company. Before that, Nate was at Salesforce for nearly six years, first overseeing competitive intelligence and later enterprise marketing and executive programs, before he joined Amazon Web Services. An Arizona State University graduate, Nate has also completed the Stanford Graduate School Executive Program and taken courses at MIT. Nate currently lives in Atlanta and loves all things American history and spending time with his three kids.



50 Years Ago, I Helped Invent the Internet. How Did It Go So Wrong?

When I was a young scientist working on the fledgling creation that came to be known as the internet, the ethos that defined the culture we were building was characterized by words such as ethical, open, trusted, free, shared. None of us knew where our research would lead, but these words and principles were our beacon.

We did not anticipate that the dark side of the internet would emerge with such ferocity. Or that we would feel an urgent need to fix it.

How did we get from there to here?

While studying for my doctorate at MIT in the early 1960s, I recognized the need to create a mathematical theory of networks that would allow disparate computers to communicate. Later that decade, the Advanced Research Projects Agency — a research funding arm of the Department of Defense created in response to Sputnik — determined it needed a network based on my theory so that its computer research centers could share work remotely.

My UCLA computer lab was selected to be the first node of this network. Fifty years ago — on Oct. 29, 1969 — a simple “Lo” became the first internet message, from UCLA to Stanford Research Institute. We had typed the first two letters of “login” when the network crashed.

This quiet little moment of transmission over that two-computer communication network is regarded as the founding moment of the internet.

During its first 25 years, the internet grew dramatically and organically with the user community seeming to follow the same positive principles the scientists did. We scientists sought neither patents nor private ownership of this networking technology. We were nerds in our element, busily answering the challenge to create new technology that would benefit the world.

Around 1994, the internet began to change quickly as dot-coms came online, the network channels escalated to gigabit speeds and the World Wide Web became a common household presence. That same year, Amazon was founded and Netscape, the first commercial web browser, was released.

And on April 12, 1994, a “small” moment with enormous meaning occurred: The transmission of the first widely circulated spam email message, a brazen advertisement. The collective response of our science community was “How dare they?” Our miraculous creation, a “research” network capable of boundless computing magnificence, had been hijacked to sell … detergent?

By 1995, the internet had 50 million users worldwide. The commercial world had recognized something we had not foreseen: The internet could be used as a powerful shopping machine, a gossip chamber, an entertainment channel and a social club. The internet had suddenly become a money-making machine.

With the profit motive taking over the internet, the very nature of innovation changed. Averting risk dominated the direction of technical progress. We no longer pursued “moonshots.” Instead advancement came via baby steps — “design me a 5% faster Bluetooth connection” as opposed to “build me an internet.” An online community that had once been convivial transformed into one of competition, antagonism and extremism.

And then as the millennium ended, our revolution took a more disturbing turn that we continue to grapple with today.

By suddenly providing the power for anyone to immediately reach millions of people inexpensively and anonymously, we had inadvertently also created the perfect formula for the “dark” side to spread like a virus all over the world. Today more than 50% of email is spam, but far more troubling issues have emerged — including denial of service attacks that can immobilize critical financial institutions and malicious botnets that can cripple essential infrastructure sectors.

Other dangerous players, such as nation-states, started coming onto the scene around 2010, when Stuxnet malware appeared. Organized crime recognized the internet could be used for international money laundering, and extremists found the internet to be a convenient megaphone for their radical views. Artificial intelligence, machine learning, facial recognition, biometrics and other advanced technologies could be used by governments to weaken democratic institutions.

The balkanization of the internet is now conceivable as firewalls spring up around national networks.

We could try to push the internet back toward its ethical roots. However, it would be a complex challenge requiring a joint effort by interested parties — which means pretty much everyone.

We should pressure government officials and entities to more zealously monitor and adjudicate such internet abuses as cyberattacks, data breaches and piracy. Governments also should provide a forum to bring interested parties together to problem-solve.

Citizen-users need to hold websites more accountable. When was the last time a website asked what privacy policy you would like applied to you? My guess is never. You should be able to clearly articulate your preferred privacy policy and reject websites that don’t meet your standards. This means websites should provide a privacy policy customized to you, something they should be able to do since they already customize the ads you see. Websites should also be required to take responsibility for any violations and abuses of privacy that result from their services.

Scientists need to create more advanced methods of encryption to protect individual privacy by preventing perpetrators from using stolen databases. We are working on technologies that would hide the origin and destination of data moving around the network, thereby diminishing the value of captured network traffic. Blockchain, the technology that underpins bitcoin and other digital currencies, also offers the promise of irrefutable, indisputable data ledgers.

If we work together to make these changes happen, it might be possible to return to the internet I knew.

Leonard Kleinrock is distinguished professor of computer science at the UCLA Samueli School of Engineering.

Fifty Years Ago, the Internet Was Born in Room 3420

When I visited UCLA’s Boelter Hall last Wednesday, I took the stairs to the third floor, looking for Room 3420. And then I walked right by it. From the hallway, it’s a pretty unassuming place.

But something monumental happened there 50 years ago today. A graduate student named Charley Kline sat at an ITT Teletype terminal and sent the first digital data transmission to Bill Duvall, a scientist who was sitting at another computer at the Stanford Research Institute (now known as SRI International) on the other side of California. It was the beginning of ARPANET, the small network of academic computers that was the precursor to the internet.

At the time, this brief act of data transfer wasn’t anything like a shot heard round the world. Even Kline and Duvall didn’t appreciate the full significance of what they’d accomplished: “I don’t remember anything specifically memorable about that night, and I certainly didn’t realize that what we had done was anything special at the time,” says Kline. But their communications link was proof of the feasibility of the concepts that eventually enabled the distribution of virtually all the world’s information to anybody with a computer.

Today, everything from our smartphones to our garage door openers is a node on the network that descended from the one Kline and Duvall tested that day. How they and others established the original rules for shuttling bytes around the world is a tale worth sharing—especially when they tell it themselves.

“That better never happen again”

Even back in 1969, many people had helped set the stage for Kline and Duvall’s breakthrough on the night of October 29—including UCLA professor Leonard Kleinrock, whom I spoke with, along with Kline and Duvall, as the 50th anniversary approached. Kleinrock, who is still at UCLA today, told me that ARPANET was, in a sense, a child of the Cold War. When the Soviet Union’s Sputnik 1 satellite blinked across U.S. skies in October 1957, it sent shockwaves through both the scientific community and the political establishment.

Room 3420, restored to its 1969 glory. [Photo: Mark Sullivan]

Sputnik’s launch “caught the United States with its pants down, and Eisenhower said, ‘That better never happen again,’” recounts Kleinrock when I spoke with him in Room 3420, which is now known as the Kleinrock Internet History Center. “So in January ’58, he formed the Advanced Research Projects Agency (ARPA) within the Department of Defense to support STEM—science, technology, engineering, and mathematics—in United States universities [and] research labs.”

By the mid-1960s, ARPA had provided funding for large computers used by researchers in universities and think tanks around the country. The ARPA official in charge of the financing was Bob Taylor, the key figure in computing history who later ran Xerox’s PARC lab. At ARPA, he had become painfully aware that all those computers spoke different languages and couldn’t talk to each other.

Taylor hated the fact that he had to have separate terminals—each with its own leased communication line—to connect with various remote research computers. His office was full of Teletypes.

In 1969, Teletype terminals like this one were essential computing devices. [Photo: Mark Sullivan]

“I said, oh, man, it’s obvious what to do. If you have these three terminals, there ought to be one terminal that goes anywhere you want to go,” Taylor told the New York Times’s John Markoff in 1999. “That idea is the ARPANET.”

Taylor had an even more practical reason to crave a network. He was regularly getting requests from researchers around the country for funds to buy bigger and better mainframe computers. He knew that much of the computing power the government was funding was being wasted, explains Kleinrock. When a researcher maxed out system resources at SRI in California, for example, another mainframe at MIT might be sitting idle, perhaps after regular business hours on the East Coast.

Or it might be that a mainframe at one site contained some software that might be useful in other places, such as the pioneering ARPA-funded graphics software developed at the University of Utah. Without a network, “if I’m here at UCLA and I want to do graphics, I’m going to go to ARPA—please buy me that machine so I can have it too,” says Kleinrock. “Everybody wanted everything.” By 1966, ARPA had grown weary of such requests.

Leonard Kleinrock [Photo: Mark Sullivan]

The problem was that all those computers spoke different languages. Back at the Pentagon, Taylor’s computer scientists explained that all those research computers were running different code sets. There was no common networking language, or protocol, by which computers located far away from each other could connect to share content or resources.

That soon changed. Taylor talked ARPA director Charles Herzfeld into allocating a million dollars for R&D into a new network to connect the computers at MIT, UCLA, SRI, and many other sites. Herzfeld got the money by redirecting it from a ballistic missile research program into the ARPA budget. The cost was justified within DoD circles by saying that ARPA was tasked with building a “survivable” network that wouldn’t go down if any specific part was destroyed, perhaps in a nuclear attack.

ARPA brought in Larry Roberts, an old MIT buddy of Kleinrock’s, to manage the ARPANET project. Roberts turned to the work of the British computer scientist Donald Davies and the American Paul Baran for the data carriage techniques they’d invented.

And Roberts soon called on Kleinrock to work on the theoretical aspect of the project. Kleinrock had been thinking about the problem of data networking since 1962, when he was still at MIT.

“At MIT, as a graduate student, I decided to address the following problem: I was surrounded by computers and they couldn’t talk to each other, and I knew sooner or later they’d have to,” says Kleinrock. “Nobody was looking at that problem. They were all studying information theory and coding theory.”

Kleinrock’s major contribution to ARPANET was something called queuing theory. Back then, communication links were analog lines you could rent from AT&T. They were circuit-switched lines, meaning that a central switch set up a dedicated connection between a sender and a receiver, whether they were two people engaged in a phone call or a terminal connecting to a distant mainframe. There was a lot of downtime on those circuits when words weren’t being said or when bits weren’t being transferred.

Leonard Kleinrock’s MIT dissertation laid down concepts that informed the ARPANET project. [Photo: Mark Sullivan]

Kleinrock felt this was a wildly inefficient way to set up connections between computers. Queuing theory provides a way for data packets from different communications sessions to share links dynamically. While one stream of packets pauses, another unrelated one might utilize the same link. The packets comprising one communication session (say, an email send) might find their way to the receiver using four different routes. If one route was disabled, the network would route the packets through a different one.
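To make the contrast with circuit switching concrete, here is a minimal, illustrative sketch of the store-and-forward idea Kleinrock describes. It is not ARPANET code: the four-node topology, the names, and the per-packet graph search are invented for this example (real IMPs used an adaptive, table-driven routing scheme rather than a search per packet).

# Illustrative sketch of packet switching with failover. Topology and
# logic are invented for this example; this is not ARPANET code.
from collections import deque

# Each pair is a bidirectional link between two nodes.
links = {
    ("UCLA", "SRI"), ("UCLA", "UCSB"), ("UCSB", "SRI"),
    ("SRI", "Utah"), ("UCSB", "Utah"),
}

def link_up(a, b, failed):
    return (a, b) not in failed and (b, a) not in failed

def route(src, dst, failed=frozenset()):
    """Breadth-first search over surviving links: if one route is
    disabled, packets are forwarded through another."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for a, b in links:
            for u, v in ((a, b), (b, a)):
                if u == path[-1] and v not in seen and link_up(u, v, failed):
                    seen.add(v)
                    frontier.append(path + [v])
    return None  # no surviving route (never hit in this example)

# A message is chopped into packets; each one is routed independently,
# so packets of a single session may travel different paths.
message = "LOGIN"
for i, ch in enumerate(message):
    # Simulate the direct UCLA-SRI link dying partway through the send.
    failed = {("UCLA", "SRI")} if i >= 2 else frozenset()
    print(f"packet {i} ({ch!r}):", " -> ".join(route("UCLA", "SRI", failed)))

The toy example makes Kleinrock’s point: no central switch reserves a circuit, and when a link dies mid-message, the remaining packets simply flow around the failure.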

During our conversation in Room 3420, Kleinrock showed me his dissertation on all this, sitting in a red binder on one of the tables. He published his research in book form in 1964.

In this new kind of network, the movement of the data was directed not by a central switch but by devices at the network nodes. In 1969, these network devices were called IMPs, or Interface Message Processors. Each machine was a ruggedized, modified version of a Honeywell DDP-516 computer that contained specialized hardware for network control.

The original IMP was delivered to Kleinrock at UCLA on Labor Day in 1969. Today, it stands like a monolith in the corner of Room 3420 at Boelter Hall, where it has been restored to look like it did when it handled the first internet transmission 50 years ago.

“15-hour days every day”

In the fall of 1969, Charley Kline was a graduate student trying to finish his degree in engineering. He was one of a group of graduate students who moved onto the ARPANET project after Kleinrock received government funding to help develop the network. In August, Kline and others on the project worked diligently to prepare the software on UCLA’s Sigma 7 mainframe computer to connect to the IMP. Since there was no standard interface between a computer and an IMP—Bob Metcalfe and David Boggs wouldn’t invent Ethernet until 1973—the group built a 15-foot connection cable from scratch. Now all they needed was another computer to communicate with.

Charley Kline [Photo: courtesy of Charley Kline]

SRI was the second research site to get an IMP, in early October. For Bill Duvall, that kicked off a period of intense preparation to get ready for the first transmission from UCLA to SRI’s SDS 940. The UCLA and SRI teams had committed to creating the first successful transmission by October 31, he told me.

“I basically jumped in and designed and implemented the software, and it was one of those intense things that happen in software, which is 15-hour days every day for however long it takes,” he remembers.

As Halloween neared, the pace of the work at both UCLA and SRI ratcheted up. They were ready to go before the deadline arrived.

“Now we had two nodes, and we leased this line from AT&T at the blazing speed of 50,000 bits per second,” says Kleinrock. “So now we’re ready to do it, to log in.”

“The first test that we scheduled was on October 29,” adds Duvall. “It was pre-alpha at that point. And, you know, we thought, well, okay, that’ll give us three testing days to get this up and running.”

On the night of the 29th, Kline was working late. So was Duvall at SRI. The two had planned to attempt the first ARPANET message at night, so that nobody’s work would be affected should one of the computers crash. In Room 3420, Kline sat alone in front of his terminal, an ITT Teletype that was connected to the computer.

Here’s what happened that night—complete with one of the more historic crashes in computing history—in Kline and Duvall’s own words:

Kline: I was logged into the Sigma 7 operating system and then I [ran] the program that I had written, which allowed me to then tell that program to try to send packets to SRI. Meanwhile, Bill Duvall at SRI had run his program to accept incoming connections. And we were also on the phone with each other.

We had a few problems in the beginning. I had a problem with code translation, because our system used EBCDIC (Extended Binary Coded Decimal Interchange Code), which was the standard that IBM used and also the standard that the Sigma 7 used. But the SRI computer used ASCII (American Standard Code for Information Interchange), which also became the standard of the ARPANET and pretty much the world.

So after we got a few of those little bugs worked out, we tried to actually log in . . . and you did that by typing the word “login.” That [SRI] system had been programmed to be smart, so that it recognized valid commands. And if you had it in the advanced mode, when you typed the “L” and the “O” and the “G,” it recognized that you must be meaning to type “LOGIN” and it would type the “I N” for you. So I typed the L.

I was on the phone [with Duvall at SRI] and said, ‘Did you get the L?’ And he said, ‘Yeah.’ And I saw the “L” come back and print on my terminal. And I typed the “O” and he said, “Got the O.” And I typed the G, and he said, “Wait a minute, my system crashed.”

Bill Duvall [Photo: courtesy of Bill Duvall]

Duvall: After a couple of letters, there was a buffer overflow issue. That was a very simple thing to detect and fix, and basically it came right back up and it worked. The only reason I mentioned that is that, in my view, this whole thing is not about that. This is about the fact that the ARPANET works.

Kline: He had a little minor bug, and it took him 20 minutes or whatever to fix it and try it again. He had to make a change in some software. I had to double-check some of my software. He called me back, and we tried it again. So we started over and I typed the L and the O and the G, but this time I got back the “I N.”
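The code-translation bug Kline mentions is easy to reproduce today, because Python still ships IBM’s EBCDIC code pages. Here is a minimal sketch; cp037, a common EBCDIC variant, is used as a stand-in, since the Sigma 7’s exact code set is an assumption for illustration.

# "LOGIN" in two code sets: EBCDIC bytes are gibberish to an ASCII
# machine. cp037 is a common IBM EBCDIC variant, used here as a
# stand-in for the Sigma 7's encoding.
word = "LOGIN"

ebcdic_bytes = word.encode("cp037")  # what an EBCDIC host would send
ascii_bytes = word.encode("ascii")   # what the SDS 940 side expected

print(ebcdic_bytes.hex())  # d3d6c7c9d5, every byte above 0x7f
print(ascii_bytes.hex())   # 4c4f47494e

# Received raw, the EBCDIC bytes do not read as "LOGIN" at all:
print(ebcdic_bytes.decode("latin-1"))  # 'ÓÖÇÉÕ'

# The fix amounts to a translation step at the boundary:
print(ebcdic_bytes.decode("cp037"))  # 'LOGIN'

As Kline notes, ASCII went on to become the ARPANET’s common format, so hosts like UCLA’s translated at the edge.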

“Just engineers working”

The initial connection happened at 10:30 p.m. Pacific time. After that, Kline was able to sign into an account on the SRI computer that Duvall had created for him and start running programs, using the system resources of a computer 350 miles up the coast from UCLA. In a small way, ARPANET’s mission had been accomplished.

“By that time, it was getting late, so I went home,” Kline told me.

A plaque in Room 3420 explains what happened there. [Photo: Mark Sullivan]

The team knew it had succeeded, but didn’t dwell on the magnitude of its accomplishment: “It was just engineers working,” says Kleinrock. Duvall saw the October 29 connection as just one step in the larger challenge of networking computers. Where Kleinrock’s work focused on how data packets could be directed around a network, the SRI researchers had worked on how a packet is constructed and how the data inside it is organized.

“This was basically where the paradigm that we see now on the internet with linked documents and things like that was first developed,” Duvall says. “We always envisioned that we would have a series of interconnected workstations and interconnected people. We called them knowledge centers in those days, because we were academically oriented.”

Within a few weeks of Kline and Duvall’s first successful communication, the ARPA network extended to computers at UC Santa Barbara and the University of Utah. ARPANET grew from there, through the ’70s and much of the 1980s, connecting more and more government and academic computers. Later, the concepts developed in ARPANET would be applied to the internet we know today.

Back in 1969, a UCLA press release touted the new ARPANET. “As of now, computer networks are still in their infancy,” it quoted Kleinrock as saying. “But as they grow up and become more sophisticated, we will probably see the spread of ‘computer utilities,’ which, like present electric and telephone utilities, will service individual homes and offices across the country.”

That concept sounds a little quaint now that data networks reach far past homes and offices and down to the smallest internet-of-things devices. But Kleinrock’s statement about “computer utilities” was remarkably prescient, especially given that the modern, commercialized internet did not come into being until decades later. The idea remains fresh in 2019, even as computing resources are well on their way to being as ubiquitous and easy to take for granted as electricity.

Maybe anniversaries like this one are good opportunities to not only remember how we got to this highly connected era, but also look out into the future—like Kleinrock did—to think about where the network might be headed next.

Eight Years After Its Launch, Twitch Is Getting a Slightly New Look

Twitch is getting a new look ahead of this year’s TwitchCon in North America, bringing a new logo, a new purple, and a new font to the streaming site’s branding.

The new design has been a year in the making, says Byron Phillipson, the company’s executive creative director. The team went deep, he continued, digging into what makes Twitch Twitch. The idea was to future-proof the brand and to better represent its creator community. “I think the important feature is that we’re not tearing everything down,” says Phillipson. “Twitch is a brand that is loved by a lot of people, and we want to be very considerate to our community.” (He did add that the number of people with the Twitch glitch tattoo made the team give the refresh some extra consideration.)

The nuts and bolts: there is a new purple; there is a new font called Roobert, which is based on the Moog synthesizer typeface; there are around 20 new colors; and there is a new glitch. The net effect is that Twitch’s logo looks sleeker and more modern. It’s less blocky and way less 2011 than its predecessor. The new font is also accessible, says Tricia Choi, director of design systems, and there are plans to implement a high contrast feature; to that end, the company is hiring a program manager for accessibility and inclusive design, which points to its ambitions in that arena.

The change comes with a new slogan — “You’re already one of us” — which is meant to both welcome new creators to the platform who aren’t necessarily the company’s core audience of gamers and to showcase the variety of content that is already on the platform.



The new look will be paired with a large ad campaign featuring some of the platform’s biggest names to introduce it to the world. Phillipson says the company plans to bring “new folks to Twitch through the vehicle of our creators.” That seems to be at the heart of the redesign. It feels like the new look is meant to draw people in while keeping the focus squarely on the platform’s community.

During the research process, the company tried to identify a framework to guide the redesign. “The monster that we set out to fight in our mission was the fear of irrelevance,” says Phillipson. “Everything we do needs to be in service of making sure that the folks that are on our platform — our community — really feel like they matter.”

Four Years in Startups

Depending on whom you ask, 2012 represented the apex, the inflection point, or the beginning of the end for Silicon Valley’s startup scene—what cynics called a bubble, optimists called the future, and my future co-workers, high on the fumes of world-historical potential, breathlessly called the ecosystem. Everything was going digital. Everything was up in the cloud. A technology conglomerate that first made its reputation as a Web-page search engine, but quickly became the world’s largest and most valuable private repository of consumer data, developed a prototype for a pair of eyeglasses on which the wearer could check his or her e-mail; its primary rival, a multinational consumer-electronics company credited with introducing the personal computer to the masses, thirty years earlier, released a smartphone so lightweight that gadget reviewers compared it to fine jewelry.

Technologists were plucked from the Valley’s most prestigious technology corporations and universities and put to work on a campaign that reëlected the United States’ first black President. The word “disruption” proliferated, and everything was ripe for or vulnerable to it: sheet music, tuxedo rentals, home cooking, home buying, wedding planning, banking, shaving, credit lines, dry-cleaning, the rhythm method. It was the dawn of the unicorns: startups valued, by their investors, at more than a billion dollars. The previous summer, a prominent venture capitalist, in the op-ed pages of an international business newspaper, had proudly declared that software was “eating the world.”

Not that I was paying any attention. At twenty-five, I was working in publishing, as an assistant to a literary agent, sitting at a narrow desk outside my boss’s office, frantically e-mailing my friends. The year before, I’d received a raise, from twenty-nine thousand dollars to thirty. What was my value? One semester of an M.F.A. program; fifteen hundred chopped salads, after taxes. I had a year left on my parents’ health insurance.

I was staving off a thrumming sense of dread. An online superstore, which had got its start, in the nineties, by selling books on the World Wide Web, was threatening to destroy publishing with the tools of monopoly power: pricing and distribution. People were reeling from the news that the two largest publishing houses, whose combined value pushed past two billion dollars, had agreed to merge. In the evenings, at dive bars, I met with other editorial and agency assistants, all women, all of us in wrap dresses and cardigans, for whiskey-and-sodas and the house white. Publishing had failed to innovate, but surely we—the literary, the passionate, lovers and defenders of human expression—couldn’t lose?

One afternoon, at my desk, I read an article about a startup, based in New York, that had raised three million dollars to bring a revolution to publishing. It was building an e-reading app for mobile phones which operated on a subscription model. The pitch—access to a sprawling library of e-books for a modest monthly fee—should have seemed too good to be true, but the app was a new concept for publishing, an industry where it seemed as if the only ways to have a sustainable career were to inherit money, marry rich, or wait for our superiors to defect or die.

My interviews with the e-book startup were so casual that at a certain point I wondered if the three co-founders just wanted to hang out. They were younger than I was but spoke about their work like industry veterans, and were generous with unsolicited business advice. I wanted, so much, to be like them. I joined at the beginning of 2013.

The job, which had been created for me, was a three-month trial run. As a full-time contractor, I would be paid twenty dollars an hour, with no benefits. Still, the annual salary amounted to forty thousand dollars. On my start date, I arrived at the office, a loft a block from Canal Street, to find a stack of hardcover books about technology, inscribed by the founders and stamped with a wax seal of the company logo: a mollusk, unavoidably yonic, with a perfect pearl.

The e-book startup had millions of dollars in funding, but the app was still in “private alpha,” used by only a few dozen friends, family members, and investors. For the first time in my career, I had some semblance of expertise. The founders asked for my opinions on the app’s user interface and the quality of the inventory, and on how we could best ingratiate ourselves with the online reading communities, the largest of which would soon be acquired by the monopolistic online superstore. One afternoon, the C.E.O. summoned the other two founders and the staff of three to a conference room to practice his presentation to publishers. He opened by saying that this was the era of the sharing economy. Music, movies, television, retail, and transportation had been disrupted. Apparently, the time had come for books. He flipped to a slide that displayed the logos of various successful subscription platforms, with ours at the center. “Hemingway” was misspelled in the pitch deck: two “m”s.

After the first few weeks, it seemed that the founders were paying me mostly to look for new office furniture and order them snacks: single-serving bags of sliced apples, tiny chocolate bars, cups of blueberry yogurt. “She’s too interested in learning, not doing,” the C.E.O. wrote. He meant to send the message to the two other co-founders, but mistakenly posted it in the company chat room. He apologized sincerely, while I looped the words in my head. I had not understood that the founders hoped I would make myself indispensable. I had never heard the tech incantation “Ask forgiveness, not permission.”

Soon afterward, the co-founders informed me that the areas where I could add value would not be active for some time. They assumed that I wanted to continue working in tech, and I didn’t disabuse them of this notion.

One of the e-book startup’s co-founders helped arrange an interview at an analytics startup in San Francisco. The role was in customer support, which I was not particularly excited about, but it was an entry-level position that required no programming knowledge. As a sociology major with a background in literary fiction and three months of experience in snack procurement, I assumed I was not in a position to be picky.

The night before the interview, in a bedroom I’d rented through a millennial-friendly platform for sleeping in strangers’ bedrooms, I read puff pieces about the analytics startup’s co-founders, now twenty-four and twenty-five, with one Silicon Valley internship between them and a smart, practical dream of a world driven by the power of Big Data. A renowned seed accelerator in Mountain View had offered funding and connections in exchange for a seven-per-cent stake, and the C.E.O. and the technical co-founder left their college in the Southwest to join. The startup had twelve million dollars in venture funding, thousands of customers, and seventeen employees.

In the office, the manager of the Solutions team, a hirsute man with a belly laugh, presented me with a series of questions and puzzles. A wiry sales engineer showed me how to write a function that rearranged the characters in a long string of letters. The technical co-founder watched me complete a reading-comprehension section from the LSAT.

The offer included company-paid medical and dental coverage and a starting salary of sixty-five thousand dollars a year. The Solutions manager did not mention equity, and I didn’t know that early access to it was the primary reason people joined startups. Eventually, the company’s in-house recruiter recommended that I negotiate a small stake, explaining that all the other employees had one.

Friends at home told me that they were excited for me, then asked whether I was sure I was making the right decision. The media tended to cover tech as a world of baby-faced nerds with utopian ambitions and wacky aesthetic preferences, but to my friends it was a Wall Street sandbox. I stuck to the narrative that working in analytics would be an experiment in separating my professional life from my personal life. Maybe I would start the short-story collection I had always wanted to write. Maybe I would take up pottery. I could learn to play the bass. I could have the sort of creative life that creative work would not sustain. It was easier to fabricate a romantic narrative than to admit that I was ambitious—that I wanted my life to pick up momentum.

Startups in New York were eager to create services for media and finance; software engineers in the Bay Area were building tools for other software engineers. The analytics platform enabled companies to collect customized data on their users’ behavior, and to manipulate the data in colorful, dynamic dashboards. I’d had some guilt about the opportunism of the e-book startup, but had no qualms about disrupting the Big Data space. It was thrilling to see a couple of twentysomethings go up against middle-aged leaders of industry. It looked like they might win.

I was employee No. 20, and the fourth woman. The three men on the Solutions team wore Australian work boots, flannel, and high-performance athletic vests; drank energy shots; and popped Vitamin B in the mornings. The Solutions manager assigned me an onboarding buddy, whom I’ll call Noah—employee No. 13—a curly-haired twenty-six-year-old with a forearm tattoo in Sanskrit. He struck me as the kind of person who would invite women over to listen to Brian Eno and then actually spend the night doing that. I spent my first few weeks with Noah carting around an overflowing bowl of trail mix and a rolling whiteboard, on which he patiently diagrammed how cookie tracking worked, how data were sent server-side, how to send an HTTP request. He gave me homework and pep talks. Our teammates handed me beers in the late afternoon. I was happy; I was learning. The first time I looked at a block of code and understood what was happening, I felt like a genius.

We treated the C.E.O., a twenty-four-year-old with gelled, spiky hair, like an oracle. The child of Indian immigrants, he mentioned, not infrequently, his parents’ hope that he would finish his undergraduate degree. Instead, he was responsible for other adults’ livelihoods.

On Tuesdays at noon, we would roll our desk chairs into the middle of the office and flank him in a semicircle, like children at a progressive kindergarten, for the weekly all-hands. Packets containing metrics and updates from across the company were distributed. We were doing well. An I.P.O. seemed imminent. The engineers had built an internal Web site to track revenue, which meant that we could watch the money come in in real time. The message was clear and intoxicating: society valued our contributions and, by extension, us. Still, the C.E.O. motivated us with fear. “We are at war,” he would say, his jaw tense. We would look down at our bottles of kombucha and nod gravely. At the end of the meeting, the packets were gathered up and shredded.

Camaraderie came easily. We all felt indispensable. Failures and successes reflected personal inadequacies or individual brilliance. Slacking off was not an option. Research did not necessarily support a correlation between productivity and working hours beyond a reasonable threshold, but the tech industry thrived on the idea of its own exceptionalism; the data did not apply to us. We were circumventing the fussiness and the protocol of the corporate world. As long as we were productive, we could be ourselves.

I did not want to be myself. I envied my teammates’ sense of entitlement, their natural ease. I began wearing flannel. I incorporated B Vitamins into my regimen and began listening to E.D.M. while I worked. The sheer ecstasy of the drop made everything around me feel like part of a running-shoe ad or a luxury-car commercial, though I couldn’t imagine driving to E.D.M. Was this what it felt like to hurtle through the world in a state of pure confidence, I wondered—was this what it was like to be a man? I would lean against my standing desk and dance while pounding out e-mails, bobbing in solidarity with the rest of the team.

Each new employee, regardless of department, was required to spend a few days at the Solutions cluster, answering support tickets––like working the mail room in Hollywood. The C.E.O. believed that this experience built empathy for our customers. It did not necessarily build empathy for Support. The engineers and salespeople tossed off replies to customer inquiries and rolled their eyes at developers who did not understand our product. The engineers had been hired at two or three times my salary, and their privileged position in the industry hierarchy should have exempted them from such tedium. It wasn’t exactly that they harbored contempt for our users; they just didn’t need to think about them.

In theory, the tool was straightforward. But when users—engineers and data scientists, almost all of them men—encountered problems, they would level accusations and disparage the company on social media. My job was to reassure them that the software was not broken. Looking at their source code or data, I explained where things had gone haywire. Some days, helping men untangle problems that they had created, I felt like a piece of software myself, a bot: instead of being an artificial intelligence, I was an intelligent artifice, an empathetic text snippet or a warm voice, giving instructions, listening comfortingly. Twice a week, I hosted live Webinars for new customers. I asked my parents to join, as if to prove that I was doing something useful, and, one morning, they did. My mother e-mailed afterward. “Keep that perky tone!” she wrote.

After two months, the Solutions manager took me for a walk around the neighborhood. We passed a strip club, a popular spot for parties during developer conferences, which my co-workers claimed had a superlative lunch buffet. We circumvented people sleeping on steaming grates. He looked at me with kind eyes, as if he had given birth to me. “We’re giving you an extra ten thousand dollars,” he said. “Because we want to keep you.”

The simplest way to solve users’ problems was by granting the Solutions team access to all our customers’ data sets. This level of employee access—some of us called it God mode—was normal for the industry, common for small startups whose engineers were overextended. It was assumed that we would look at our customers’ data sets only out of necessity, and only when our doing so was requested by the customers themselves; that we would not, under any circumstances, look up the profiles of our lovers and family members and co-workers in the data sets belonging to dating apps and shopping services and fitness trackers and travel sites. It was assumed that if a publicly traded company was using our software we would resist buying or selling its stock. Our tiny startup operated on good faith. If good faith failed, there was a thorough audit log of all employee behavior. The founders tracked the customer data sets we looked at and the specific reports we ran.

Early in the summer of 2013, news broke that a National Security Agency contractor had leaked classified information about the U.S. government’s surveillance programs. The N.S.A. was reading private citizens’ personal communications, crawling through people’s Internet activity by gathering cookies. The government had penetrated and pillaged the servers of global technology companies. Some commentators said that tech companies had, essentially, collaborated, by creating back doors that the government could access. Others defended the tech companies’ innocence. In the office, we never talked about the whistle-blower—not even during happy hour.

I was making seventy-five thousand dollars a year. It felt like getting away with something. Even so, when I ran out of work to do on nights and weekends, I felt free, invisible, and lonely. The city’s green spaces overflowed with couples jogging next to each other and cycling on bikes with matching panniers. I spent hours in bed, drinking coffee and thumbing my phone. On a dating app, I made plans with two men, both of whom seemed boring and benign, before deciding that I couldn’t go through with it. I deleted the app. A few days later, one of them messaged me on a social network everyone hated. I tried to reverse engineer how he’d identified me, but couldn’t.

Noah took me under his wing. He had grown up in Marin, and had moved back to California after college, hoping to live a bohemian life. Meeting his friends was like swinging open the gate to a version of the Bay Area I thought no longer existed: Here were chefs and social workers, academics and musicians, dancers and poets. Everyone was inventing a way to live. Some women instituted gender reparations with their partners, redistributing the housework to compensate for decades of patriarchal control. Atheists bought tarot decks and went to outposts in Mendocino to supervise one another through sustained, high-dose LSD trips. They went on retreats to technology-liberation summer camps, where they locked up their smartphones and traded their legal names for pseudonyms evoking berries and meteorological phenomena. I attended a spa-themed party at a communal house and wandered the grounds in a robe, avoiding the hot tub—a sous-vide bath of genitalia.

At a birthday party north of the Panhandle, Noah’s roommate, Ian, sat down beside me and struck up a conversation. Ian was soft-spoken and whistled slightly when he pronounced the letter “s.” He had static-electricity hair and a sweet, narrow smile. He asked questions and then follow-up questions, a novelty. It took a while for me to steer the conversation to him. He worked in robotics, I eventually learned, programming robotic arms to do camerawork for films and commercials. The studio where he worked had recently been acquired by the search-engine giant down in Mountain View. One of the founders had been sent a set of three-hundred-thousand-dollar speakers as a welcome gift; when a pallet of electric skateboards arrived at the studio, Ian and his co-workers knew that the deal had closed.

Noah had been with the startup for a year, and was preparing for his annual review. Before the meeting, he sent me his self-assessment and a memo he had written, asking what I thought. As an early employee, Noah was often the recipient of grievances and concerns from teammates and customers. In the memo, he pushed for changes to the product and in the company culture. He asked for a title change, more autonomy, a raise, and an increase in stock options. He presented the number of hires he’d referred, the profits of the accounts he and his referrals had acquired and nurtured, the amount of money he calculated he had generated for the company. He wanted to become a product manager and run his own team. He wanted equity commensurate with his contributions, about one per cent of the company. He framed it as an ultimatum.

Giving the chief executive an ultimatum was unprofessional, crazy, even for one of the best employees at the company. On the other hand, it was a company of twentysomethings run by twentysomethings. I read Noah’s memo twice, then wrote back to say it was risky but not unreasonable. I hoped they would give him everything he wanted. A few days later, on my way to work, I got a text message from Noah telling me that he had been fired.

At the office, the cluster felt like a funeral home. “They didn’t even try to negotiate with him,” a sales engineer said. “They just let one of our best people go, all because nobody here has any management experience.”

The early members of the Solutions team were corralled into an unscheduled meeting with the C.E.O. He told us to sit down, standing at the front of the room, arms folded. “If you disagree with my decision to fire him, I’m inviting you to hand in your resignation,” he said, speaking slowly. He looked around the table, addressing each of us individually.

“Do you disagree with my decision?” he asked the account manager.

“No,” the account manager said, raising his palms as if at gunpoint.

“Do you disagree with my decision?” the C.E.O. asked the sales engineer.

“No,” the sales engineer said. His eyelids fluttered. He looked ill.

“Do you disagree with my decision?” the C.E.O. asked me. I shook my head, face hot. Of course not, I lied.

Later, the C.E.O. denied that this meeting had taken place—it wasn’t something he would do, he said. At the time, it had seemed perfectly in character.

Ian and I biked through the city, drinking seltzer and eating avocado sandwiches on the seawall. We walked to the top of Bernal Hill and watched the fog curl around Sutro Tower; we skinny-dipped in Tomales Bay. In the winter of 2013, Ian took me to a party at the offices of a hardware startup operating out of an ivy-clad brick warehouse in Berkeley. Drones buzzed over a crowd of young professionals wearing sensible footwear. Ian disappeared with a co-worker to investigate a prototype line of self-assembling modular furniture, leaving me in a circle with a half-dozen other roboticists. The men discussed their research. One was trying to teach robots to tie different kinds of knots, like Boy Scouts. I asked if he was a graduate student. No, he said, squinting at me. He was a professor.

Talk turned to self-driving cars. How plausible were they, really? I asked. I had finished my beer, and I was bored. I also wanted to make sure everyone knew that I wasn’t just an engineer’s girlfriend who stood around at parties waiting for him to finish geeking out—though that’s exactly what I was doing. The group turned toward me, the Scout leader looking amused.

“What did you say you do?” one of them asked.

I said that I worked at a mobile-analytics company, hoping they would assume I was an engineer.

“Ah,” he said. “And what do you do there?”

Customer support, I said. He glanced at the others and resumed the conversation.

On the train home, I leaned into Ian and recounted the interaction. What sexists, I said. How dare they be so dismissive, just because I was a woman—just because I did customer support and was considered nontechnical. Ian cringed and pulled me closer. “You’re not going to like this,” he said. “But you were trying to talk shit about self-driving cars with some of the first engineers ever to build one.”

In the spring of 2014, the analytics startup released a new feature, a chart called Addiction. It displayed the frequency with which individual users engaged, synthesized on an hourly basis. Every company wanted to build an app that users were looking at throughout the day. Addiction, which quantified this obsession, was an inspired product decision by the C.E.O., executed brilliantly by the C.T.O.

Our communications director had left for a larger tech company with family-friendly benefits and policies, and was not replaced. With her departure, I became the de-facto copywriter. To promote Addiction, I ghostwrote an opinion piece for the C.E.O., published on a highly trafficked tech blog, that described the desirability of having people constantly returning to the same apps. “If you work for a SaaS”—software as a service—“company and most users are lighting up your Addiction report by using your app for 10 hours every day, you’re doing something very, very right,” I wrote, like the careerist I had become.

The novelty of the product was exciting, but the premise and the name made me uneasy. We all treated technology addiction as though it were inevitable. The branding vexed me, as I told a friend. It was as if substance abuse were an abstract concept, something that people had only read about in the papers. The friend listened while I ranted. “I hear you,” he said. The question of addiction, he told me, was already a big thing in gaming: “It’s nothing new. But I don’t see any incentive for it to change. We already call customers ‘users.’ ”

One evening, a group of us stayed late at work to watch a science-fiction movie about hackers who discover that society is a simulated reality. It was the C.E.O.’s favorite film, released the year he turned eleven. The movie didn’t just make the hackers look sexy—it glamorized circumvention, the outcast’s superiority, and omniscience. The C.E.O. sat with his laptop open, working as he watched.

At the beginning of my tenure—a decade earlier, in startup time—the C.E.O. had invited me and an entrepreneur friend of his for late-night pizza in North Beach. The walls of the pizzeria were covered in stickers, like a laptop. We ordered grandma slices and cups of water, and perched on stools in the back, chatting, almost like friends. The men insisted on getting me home safely, and hailed a cab. I started saying my goodbyes, but they got into the back seat. Seeing me home would add another hour to their trip, I protested. They buckled their seat belts. As we glided through the city, I wondered if the cab ride was an act of chivalry or a test. I felt like a prop in their inside joke. At my apartment steps, I turned back to wave, but the car was already gone.

The employees tried to be the C.E.O.’s friends, but we were not his friends. He shut down our ideas and belittled us in private meetings; he dangled responsibility and prestige, only to retract them inexplicably. We regularly brought him customer feedback, like dogs mouthing tennis balls, and he regularly ignored us. He was expensive to work for: at least two of my co-workers met with therapists to talk through their relationship with him.

Still, I was reluctant to entertain the idea that the C.E.O., who’d been under scrutiny from venture capitalists and journalists for years, was egomaniacal or vindictive. I was always looking for some exculpatory story on which to train my sympathy. By the time I started looking for other jobs, I considered my blind faith in ambitious, aggressive, arrogant young men from America’s soft suburbs a personal pathology. But it wasn’t personal at all; it had become a global affliction.

In the summer of 2014, I went for an interview at a startup that hosted a platform for open-source software development, housed in a former dried-fruit factory by the ballpark. A security guard wearing a shirt with the company logo and the words “SECRET SERVICE” showed me to the waiting room, a meticulous replica of the Oval Office. The wallpaper was striped yellow and cream. An American flag stood to the side of the Resolute desk, behind which an animation of clouds passing over the National Mall played on a screen. The rug, a deep Presidential blue, was emblazoned with the startup’s mascot, a tentacled, doe-eyed octopus-cat crossbreed, holding an olive branch above the words “IN COLLABORATION WE TRUST.” The company had attracted a hundred million dollars in venture funding, and appeared to be spending it the way most people would expect three men in their twenties to spend someone else’s money.

The offer letter arrived. “We’re expecting big things from you, ourselves, and for the company,” it read. “You should be justifiably proud.” Mostly, I was burned out. The open-source company was famous for its culture, which atypically emphasized work-life balance. For years, in emulation of the tenets of open source—transparency, collaboration, decentralization—the organization had been nonhierarchical, and the majority of employees worked remotely. Until recently, employees had named their own compensation, determined their own priorities, and come to decisions by consensus, including some related to interior design. As my host gave me a tour through the office, I noted juggling balls on a desk cluster, a children’s play area, and a barefoot employee playing video games. People shook cocktails at the company bar. There was an indoor picnic area with Adirondack chairs and plastic grass, an orange shipping container—a visual pun on “shipping code”—with a gaming room inside, and a row of so-called coder caves: dark, cushioned booths designed for programmers who worked best under the conditions of sensory deprivation.

The job was a customer-support role, but the title listed in the offer letter was, in homage to the company mascot, Supportocat. I set that humiliation aside. My co-workers at the analytics startup had made fun of me for considering a “life-style job”—it entailed a ten-thousand-dollar pay cut—but I liked the company’s utopianism. The open-source startup hosted the largest collection of source code in the world, including a public Web site with millions of open-source software projects. Excitable tech journalists sometimes referred to it as the Library of Alexandria, but for code. It was six years old, with two hundred employees and no serious competitors. The social network everyone hated and the United States government both used its tools.

For years, it had seemed that the company could do no wrong, but in the spring of 2014 the first woman on the engineering team—a developer and designer, a woman of color, and an advocate for diversity in tech—had come forward with a spate of grievances. The startup, she claimed, was a boys’ club. Colleagues condescended to her, reverted and erased her code, and created a hostile work environment. She described a group of male employees watching female employees hula-hoop to music in the office, leering as if they were at a strip club. The developer’s story was picked up by the media and went viral. The company conducted an investigation. An implicated founder stepped down; another moved to France.

For the first time, tech companies were beginning to release internal diversity data. The numbers were bleak. The people building the world’s new digital infrastructure looked nothing like the people using it. There was an ongoing fight about the “pipeline problem”—the belief, apparently divorced from conversations about power or systemic racism, that there simply weren’t enough women and underrepresented minorities in STEM fields to fill open roles. The situation at the open-source startup wasn’t the first instance of sexism and racism in the tech industry, but it was among the first to receive national attention. It made me wary, but I wondered if there might be some benefit to joining an organization forced to confront discrimination head on. Call it self-delusion or naïveté; I considered these calculations strategic.

During my first month in the job, there was a lot of chatter in the office about a group of Internet trolls who had mounted a harassment campaign against women in gaming. The trolls had flooded social networks, spouting racist, misogynistic, and reactionary rhetoric. They had been banned from nearly every platform, and had responded by citing the First Amendment and crying censorship. On our platform, they thrived.

The trolls maintained a repository of resources and data on women they were targeting—photos, addresses, personal information. The trolls’ identities, meanwhile, were impossible to trace. My co-workers debated how seriously to take the campaign. A popular narrative about trolls was that they were just a bunch of lonely men in their parents’ basements, but this looked like a coördinated effort. The repository included e-mail templates and phone-call scripts. It was, my teammates agreed, unusual to see them so organized.

In October, I flew to Phoenix for an annual conference of women in computing, established in honor of a female engineer who had helped develop military technologies during the Second World War. I was not really a woman in computing—more a woman around computing; a woman with a computer—but I was curious, and the open-source startup was a sponsor. The company put employees up in a boutique hotel with a pool and a Mexican restaurant.

On the first night, my co-workers, having flown in from Portland, Toronto, Boulder, and Chicago, gathered over margaritas and bowls of guacamole. Many hadn’t seen one another since the startup’s gender-discrimination crisis. I hovered on the periphery, hoping that the engineers would adopt me. Some of them had unnaturally colored hair and punk-rock piercings, signalling industry seniority as much as subcultural affiliation. I had no idea what it would be like to be a woman in tech whose skill set was respected. I was disappointed to learn that it wasn’t dissimilar from being a woman in tech whose skill set wasn’t.

For the most part, the other women at the open-source startup were glad that the years-long party seemed to be winding down. Leadership was scrambling to tidy up after the discrimination scandal: installing a human-resources department; disabling the prompt “/metronome,” which dropped an animated gif of a pendulous cock into the all-company chat room; rolling up the “In Meritocracy We Trust” flags. In retrospect, the adherence to meritocracy should have been suspect at a prominent international company that was overwhelmingly white, male, and American, and had fewer than fifteen women in engineering.

For years, my co-workers told me, the absence of an official organizational chart had given rise to a shadow chart, determined by social relationships and proximity to the founders. As the male engineers wrote manifestos about the importance of collaboration, women struggled to get their contributions reviewed and accepted. The company promoted equality and openness until it came to stock grants: equity packages described as “nonnegotiable” turned out to be negotiable for people who were used to successfully negotiating. The name-your-own-salary policy had resulted in a pay gap so severe that a number of women had recently received corrective increases of close to forty thousand dollars. No back pay.

In the convention center, I felt out of place among the computer-science majors, then ashamed to have impostor syndrome at a conference designed to empower women in the workforce. At a Male Allies plenary panel, a group of engineers circulated bingo boards among attendees. In each square was a different indictment: “Refers to a feminist as aggressive”; “ ‘That would never happen in my company’ ”; “Asserts other man’s heart is in the right place”; “Says feminist activism scares women away from tech”; “Wearables.” Wearables: the only kind of hardware men could imagine women caring about. At the center of the bingo board was a square that just said “Pipeline.”

The male allies, all trim, white executives, took their seats and began offering wisdom on how to manage workplace discrimination. “The best thing you can do is excel,” a V.P. at the search-engine giant, whose well-publicized hobby was stratosphere jumping, said. “Don’t get discouraged,” another said. “Just keep working hard.” Women bent over their bingo boards, checking off boxes.

Going into work was not mandatory, but I still wanted to be a part of something. In the office, I staked out an unclaimed standing desk among a cluster of engineers and left my business cards next to the monitor. I took meetings in an area atop the indoor shipping container, on couches where an engineer was rumored to have lived for several months, before being busted by our secret service.

I was employee No. 230-something. I had no trouble identifying the early employees. I saw my former self in their monopolization of the chat rooms, their disdain for the growing sales team, their wistfulness for the way things had been. Sometimes I would yearn for their sense of ownership and belonging—the easy identity, the all-consuming feeling of affiliation. And then I would remind myself, There but for the grace of God go I.

I stopped going into the office. Support met once a week, for an hour, over videoconference. I prepared for these meetings by brushing my hair, closing the curtains to the street, and tossing visible clutter onto my bed and covering it with a quilt. I would log in and lean into my laptop, enjoying the camaraderie and warmth of a team. For an hour, my studio apartment would fill with laughter and chatter, conversation tripping when the software stalled or delayed. Then I would stand up, stretch, replace the tape over my laptop camera, and open the curtains, readjusting to the silence in my room.

Some days, clocking in to work was like entering a tunnel. I would drop a waving-hand emoji into the team chat room, answer a round of customer tickets, read e-mail, process a few copyright takedowns, and skim the internal social network. In the chat software, I moved from channel to channel, reading information and banter that had accumulated overnight in other time zones. After repeating this cycle, I would open a browser window and begin the day’s true work of toggling between tabs.

Platforms designed to accommodate and harvest infinite data inspired an infinite scroll. I careened across the Internet like a drunk: small-space decoration ideas, author interviews, videos of cake frosting, Renaissance paintings with feminist captions. I read industry message boards and blogs, looking for anything to hold my interest. I learned that the e-book startup had been acquired by the search-engine giant, which had shut down its app. I watched videos of a xenophobic New York City socialite, whose greatest accomplishment was playing a successful businessman on reality television, launch a Presidential bid. I watched marriage proposals and post-deployment reunions and gender reveals: moments of bracing intimacy among people I would never know. I searched for answers, excuses, context, conclusions: “Text neck.” “Vitamin D deficiency.” “Rent calculator.” “What is mukbang?” Time passed, inevitably and unmemorably, in this manner.

By the beginning of 2016, corners of the open-source platform had become increasingly vicious and bizarre. People posted content claiming to be members of a terrorist organization; people posted content to dox government employees and stalk our staff. The company received a note so menacing that the office closed for a day.

A far-right publication ran a blog post about our V.P. of social impact—the woman responsible for managing diversity and inclusion programs—zeroing in on her critique of initiatives that tended to disproportionately benefit white women. The post was accompanied by a collage of octopus-cats, under the headline “ANTI-WHITE AGENDA REVEALED.” The article sparked a furor in the comments section, which filled up with conspiratorial statements about Marxism and Hollywood, liberal victimhood, reverse racism, and the globalist agenda. The comments snowballed into threats. Some of the threats were specific enough that the company hired security escorts for the targeted employees.

Later, I mentioned to a co-worker that all Internet harassment now seemed to follow the same playbook: the methods of the far-right commenters were remarkably similar to those of the troll bloc that, eighteen months earlier, had targeted women in gaming. It was bizarre to me that the two groups would have the same rhetorical and tactical strategies. My co-worker, a connoisseur of online forums and bulletin boards, looked at me askance. “Oh, my sweet summer child,” he said. “Those groups are not different. They are absolutely the same people.”

San Francisco was tipping into a full-blown housing crisis. Real-estate brochures offered building owners enticements to flip. “Hi, neighbor!” they chirped. “We have considered and ready buyers eager to invest in your neighborhood.” There was a lot of discussion, particularly among the entrepreneurial class, about city-building. Everyone was reading “The Power Broker”—or, at least, reading summaries of it. Armchair urbanists blogged about Jane Jacobs and discovered Haussmann and Le Corbusier. They fantasized about special economic zones. An augmented-reality engineer proposed a design to combat homelessness which looked strikingly like doghouses. Multiple startups raised money to build communal living spaces in neighborhoods where people were getting evicted for living in communal living spaces.

There was a running joke that the tech industry was simply reinventing commodities and services that had long existed. Cities everywhere were absorbing these first-principles experiments. An online-only retailer of eyeglasses found that shoppers appreciated getting their eyes checked; a startup selling luxury stationary bicycles found that its customers liked to cycle alongside other people. The online superstore opened a bookstore, the shelves adorned with printed customer reviews and data-driven signage: “Highly rated: 4.8 stars & above.” Stores like these shared a certain ephemerality, a certain snap-to-grid style. They seemed to emerge overnight: white walls and rounded fonts and bleacher seating, matte simulacra of a world they had replaced.

Scale bred homogeneity. Half the knowledge workers I encountered had the same thin cashmere sweaters I did, and the same lightweight eyeglasses. Some of us had the same skin tints, from the same foundation. We complained of the same back problems, induced by the same memory-foam mattresses. In apartments decorated with the same furniture and painted the same shades of security-deposit white, we placed the same ceramic planters, creating photogenic vignettes with the same low-maintenance plants.

In the late fall, I went home to Brooklyn, reporting into work from my childhood bedroom, making myself available between six in the morning and early afternoon. New York held my life, but the city I had grown up in no longer existed. I had been gone for almost four years, and there were now so many co-working spaces and upscale salad shops; so many anemic new buildings with narrow balconies. I wondered if anyone actually wanted these things, and, if so, who they were. Whenever I asked, friends gave the same answer: finance guys, tech bros. It was the first time I had heard the two groups referred to in the same breath, not to mention with such frequency.

Being in New York compounded a feeling I had been experiencing, of profound dissociation from my own life. I knew, as I wandered through museums with friends and video-chatted with Ian, that I needed to leave the tech industry. I was no longer high on the energy of being around people who so easily satisfied their desires—on the feeling that everything was just within reach. The industry’s hubris and naïveté were beginning to grate; I had moral, political, and personal misgivings about Silicon Valley’s accelerating colonization of art, work, everyday life. I could not have anticipated—three weeks before a Presidential election that would convince me it was safer to have a foothold, however small and tenuous, inside the walls of power—that leaving would take me more than a year.

I went with my friends to see a performance in Fort Greene by a musician and choreographer we knew. I had met him shortly after my college graduation; he was the first person I knew who was building an artistic life from scratch. The show, years in the making, had a four-night run. Onstage, dancers and musicians guided large slabs of foam into architectural arrangements, surrounded by instruments, pedals, and wires. The choreographer slipped an electric guitar over his shoulder. Light followed him as he stepped delicately across the stage, singing—hair flopping across his brow, concentration and joy all over his face. I had forgotten what it felt like to want something; to feel that what I had, or was, mattered. I cried a little, wiping my nose on the program, stung by an old loss that suddenly felt fresh.

Afterward, the performers stood in the theatre lobby, radiant, receiving bouquets wrapped in butcher paper. People in structurally inventive clothing lingered over plastic cups of wine. We offered our congratulations, then shuffled past to let in other friends who had been waiting on the periphery. Outside, we flagged a taxi. It rumbled across the Brooklyn Bridge, toward a restaurant where others were waiting. The city streaked past, the bridge cables flickering like a delay, or a glitch. ♦

18-years-after-google-images,-the-versace-jungle-print-dress-is-back

Vincenzo Riili

Country Marketing Director, Google Italy

Published Sep 21, 2019

Nearly 20 years ago, a green Versace dress broke the internet, and Google Images was born.

It was February 2000 when Jennifer Lopez wore a jungle print dress, designed by Donatella Versace, to the Grammy Awards. Seemingly overnight it became a fashion legend, as well as the most popular search query Google had seen at the time. 

But back in 2000, search results were still just a list of blue links. When the Search team realized they weren’t able to directly surface the results that people wanted—a picture of Jennifer in the dress—they were inspired to create Google Images.

Yesterday, at Milan Fashion Week, we reunited with Donatella Versace to celebrate nearly two decades since this iconic moment in fashion (and Google) history. We showed off a new, revamped green dress in the print, designed by Donatella Versace and modeled by J.Lo.


Google Tilt Brush helped decorate the runway space with digital artwork inspired by the new print.


No one predicted that the jungle print dress would have the technological impact that it did—not even J.Lo herself. Eighteen years later, Google Images is used by millions of people every day, not just to look for celebrity style or fashion photos, but to find ideas for redesigning a living room, creating a meal, or embarking on a DIY project. 

Who knows where our next big idea might come from?

discussion:-9-years-ago-sketch-took-the-torch-from-adobe-fireworks,-but…

I’m shocked at how quickly Figma was able to overshadow Sketch in every single way that matters for professional collaborative design…

No more Zeplin for comments, which means less team confusion, no new workflows, and one less license

A robust prototyping system that gives you triggers such as hover that WE ALL NEED, while still keeping the workflows lean and intuitive

More intuitive symbol editing and management

Hierarchical file and project management

Web-based UI

Honest pricing model, none of this sudden switch-to-subscription BS. It’s a subscription from the start, or it isn’t. F U

People other than designers like Figma, unlike Sketch. I can’t explain it, but getting coworkers to convert to Figma was a dream, whereas the only people I know who like Sketch are designers. No clue why, but it’s real.

I love you and I will miss you, but I’m never coming back. Too slow to answer the needs of designers and way too much emphasis on bug fixes and brand.

meet-this-year’s-doodle-for-google-contest-winner

Jessica Yu

Doodle Team Lead

Published Aug 13, 2019

I’m still not sure if I know what I want to be when I grow up. But by looking at all of the Doodle for Google submissions we have received this year, I’ve learned that kids have a lot more figured out than I do. Around 222,000 students entered this year’s contest and responded to the theme “When I grow up, I hope…”  

Yesterday, one of our guest judges, Jimmy Fallon, announced this year’s National Winner, Arantza Peña Popo. She stopped by “The Tonight Show” to chat about her winning Doodle, called “Once you get it, you give back,” which she drew in honor of her mom. “When I grow up, I hope to care for my mom as much as she cared for me my entire life,” she said. “My mom has done so much for me and sacrificed a lot.”


Today, millions of people will be able to see Arantza’s Doodle on the biggest “refrigerator door” around: the Google homepage. Additionally, Arantza will receive $30,000 toward a college scholarship and her school, Arabia Mountain High School (where she was recently named valedictorian), will receive a $50,000 technology package. Thank you to Arantza and all of the students who entered this year for sharing your hopes with us. And maybe one day, we grownups will figure out what we want to do when we grow up.