These schools graduate the most funded startup CEOs

There is no degree required to be a CEO of a venture-backed company. But it likely helps to be an alum of Harvard, Stanford or one of about a dozen other prominent universities that churn out a high number of top startup executives.

That is the central conclusion from our latest graduation-season data crunch. For this exercise, Crunchbase News took a look at top U.S. university affiliations for CEOs of startups that raised $1 million or more in the past year.

In many ways, the findings weren’t too different from what we unearthed almost a year ago, looking at the university backgrounds of funded startup founders. However, there were a few twists. Here are some key findings:

Harvard fares better in its rivalry with Stanford when it comes to training future CEOs than future founders. The two universities essentially tied for first place in the CEO alumni ranking. (Stanford was well ahead for founders.)

Business schools are big. While MBA programs may be seeing fewer applicants, the degree remains quite popular among startup CEOs. At Harvard and the University of Pennsylvania, more than half of the CEOs on our list are business school alumni.

University affiliation is influential but not determinative for CEOs. The 20 schools featured on our list graduated CEOs of more than 800 global startups that raised $1M or more in approximately the past year, a minority of the total.
Below, we flesh out these findings in more detail.

Where startup CEOs went to school

First, let’s start with school rankings. There aren’t many big surprises here. Harvard and Stanford far outpace any other institutions on the CEO list. Each counts close to 150 known alumni among chief executives of startups that raised $1 million or more over the past year.

MIT, the University of Pennsylvania and Columbia round out the top five. Ivy League schools and big research universities constitute most of the remaining institutions on our list of about 20 with a strong track record for graduating CEOs. The numbers are laid out in the chart below:

Traditional MBA popular with startup CEOs

Yes, Bill Gates and Mark Zuckerberg dropped out of Harvard. And Steve Jobs ditched college after a semester. But they are the exceptions in CEO-land.

The typical route for the leader of a venture-backed company is somewhat more staid. Degrees from prestigious universities abound. And MBA degrees, particularly from top-ranked programs, are a pretty popular credential.

Top business schools enroll only a small percentage of students at their respective universities. However, these institutions produce a disproportionately large share of CEOs. Wharton School of Business degrees, for example, accounted for the majority of CEO alumni from the University of Pennsylvania. Harvard Business School also graduated more than half of the Harvard-affiliated CEOs. And at Northwestern’s Kellogg School of Management, the share was nearly half.

CEO educational backgrounds are quite varied

While the educational backgrounds of startup CEOs show a lot of overlap, there is also plenty of room for variance. About 3,000 U.S. startups and nearly 5,000 global startups with listed CEOs raised $1 million or more since last May. In both cases, those startups were largely led by people who didn’t attend a school on the list above.

Admittedly, the math for this is a bit fuzzy. A big chunk of CEO profiles in Crunchbase (probably more than a third) don’t include college or university affiliations. Even taking this into account, however, it looks like more than half of the U.S. CEOs were not graduates of schools on the short list. Meanwhile, for non-U.S. CEOs, only a small number attended a school on the list.

So, with that, some words of inspiration for graduates: If your goal is to be a funded startup CEO, the surest route is likely to launch a startup. Degrees matter, but they’re not determinative.

Make sure to visit:


90s kids rejoice! Microsoft releases the original Windows 3.0 File Manager source code

Microsoft has released the source code for the original, 1990s-era File Manager that is so familiar to all of us who were dragging and dropping on Windows 3.0. The code, which is available on GitHub under the MIT open source license, will compile under Windows 10.

File Manager uses the multiple-document interface, or MDI, to display multiple folders inside one window. This interface style, which changed drastically with later versions of Windows, was the standard for almost a decade of Windows releases.

These little gifts to the open source community are definitely fun, but not everyone is happy. One Hacker News reader noted that “Most of the MSFT open source stuff is either garbage or completely unmaintained. Only a couple of high-profile projects are maintained, and they jam opt-out telemetry in whether you like it or not (despite hundreds of comments requesting them to remove it). Even Scott Hanselman getting involved in one of our tickets got it nowhere. Same strong-arming and neglect for customers.”

Ultimately these “gifts” to users are definitely a lot of fun and a great example of nostalgia-ware. Let me know how yours compiles by tweeting me at @johnbiggs. I’d love to see it running again.


MIT’s new headset reads the words in your head

There’s always been a glaring issue with voice computing: Speaking to a voice assistant with other people around makes you feel like a bit of a weirdo. It’s a big part of the reason we’ve been watching the technology start to take off in the home, where people feel a little less self-conscious talking to their machines.

The advent of some sort of nonverbal device that gets the job done in a similar way, but without the talking, is a kind of inevitability. A team at MIT has been working on just such a device, though the hardware design, admittedly, doesn’t go too far toward removing that whole self-consciousness bit from the equation.

AlterEgo is a head-mounted (or, more properly, jaw-mounted) device that’s capable of reading neuromuscular signals through built-in electrodes. The hardware, as MIT puts it, is capable of reading “words in your head.”

“The motivation for this was to build an IA device, an intelligence-augmentation device,” grad student Arnav Kapur said in a release tied to the news. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

The school tested the device on 10 subjects, who essentially trained the device to read their own neurophysiology. Once calibrated, the research team says it was able to achieve about 92 percent accuracy for commands, which, honestly, doesn’t seem too far off from the accuracy of the voice commands for the assistants I’ve used.

The potential for such a device is quite clear from a consumer standpoint, once you get past the creepiness of the whole reading-words-in-your-head bit. And the fact that it looks like a piece of medieval orthodontic equipment. The team also added bone conduction for audio playback to keep the system fully silent, an element that could potentially make it useful for special ops.


Feature Labs launches out of MIT to accelerate the development of machine learning algorithms

Feature Labs, a startup with roots in research begun at MIT, officially launched today with a set of tools to help data scientists build machine learning algorithms more quickly.

Co-founder and CEO Max Kanter says the company has developed a way to automate “feature engineering,” which is often a time-consuming and manual process for data scientists. “Feature Labs helps companies identify, implement, and most importantly deploy impactful machine learning products,” Kanter told TechCrunch.

He added, “Feature Labs is unique because we automate feature engineering, which is the process of using domain knowledge to extract new variables from raw data that make machine learning algorithms work.”

The company accomplishes this using a process called “Deep Feature Synthesis,” which creates features from raw relational and transactional datasets, such as website visits or abandoned shopping cart items, and automatically converts them into predictive signals, Kanter explained.

He says this is vastly different from the current human-driven process, which is time-consuming and error-prone. Automated feature engineering enables data scientists to create the same kinds of variables they would come up with on their own, but much faster, without having to spend so much time on the underlying plumbing. “By giving data scientists this automated process, they can spend more time figuring out what they need to predict,” he said.
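To make the idea concrete, here is a minimal sketch, in plain Python, of what automated feature engineering over a transaction table looks like. This illustrates the concept only; it is not the actual Featuretools API, and the field names are invented:

```python
from collections import defaultdict
from statistics import mean

def auto_features(transactions, key, numeric_field):
    """Automatically derive aggregate features (count/sum/mean/max)
    per entity from a raw transaction table."""
    groups = defaultdict(list)
    for row in transactions:
        groups[row[key]].append(row[numeric_field])
    features = {}
    for entity, values in groups.items():
        features[entity] = {
            "count": len(values),
            "sum": sum(values),
            "mean": mean(values),
            "max": max(values),
        }
    return features

# Toy transaction log: website visits with a cart value per customer
transactions = [
    {"customer_id": 1, "cart_value": 20.0},
    {"customer_id": 1, "cart_value": 40.0},
    {"customer_id": 2, "cart_value": 10.0},
]
features = auto_features(transactions, "customer_id", "cart_value")
print(features[1])  # {'count': 2, 'sum': 60.0, 'mean': 30.0, 'max': 40.0}
```

Deep Feature Synthesis generalizes this kind of aggregation, stacking it automatically across related tables rather than over a single flat log.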

Photo: Feature Labs

It achieves this in a couple of ways. First of all, it has developed an open source framework called Featuretools, which provides a way for developers to get started with the Feature Labs toolset. Kanter says they can use these tools to build small projects and get comfortable using the algorithms. “The goal of this initiative is to share our vision by giving developers the chance to experiment with automated feature engineering on new machine learning problems,” he wrote in a blog post announcing the company’s launch.

Once a company wants to move beyond experimentation to scale a project, however, it would need to buy the company’s commercial product, which is offered as a cloud service or an on-prem solution, depending on the customer’s requirements. Early clients include BBVA Bank, Kohl’s, NASA and DARPA.

The company also announced a seed funding round of $1.5 million, which actually closed last March. The round was led by Flybridge Capital Partners with participation from First Star Ventures and 122 West Ventures.

Feature Labs’ products have their roots in research by Kanter and his co-founders Kalyan Veeramachaneni and Ben Schreck at MIT’s Computer Science and Artificial Intelligence Lab, also known as CSAIL. The idea for the company began to form in 2015, and over the past couple of years they have been refining the products through their work with early clients, which has led to today’s launch.


Algorithmic zoning could be the answer to cheaper housing and more equitable cities

Zoning codes are a century old, and the lifeblood of all major U.S. cities (except arguably Houston), deciding what can be built where and what activities can take place in a neighborhood. Yet as their complexity has grown, academics are increasingly investigating whether their rule-based systems for rationalizing urban space could be replaced with dynamic systems based on blockchains, machine learning algorithms and spatial data, potentially revolutionizing urban planning and development for the next hundred years.

These visions of the future were inspired by my recent chats with Kent Larson and John Clippinger, a dynamic urban-thinking duo who have made improving cities and urban governance their current career focus. Larson is a principal research scientist at the MIT Media Lab, where he directs the City Science Group, and Clippinger is a visiting researcher at the Human Dynamics Lab (also part of the Media Lab), as well as the founder of the non-profit ID3.

One of the most severe challenges facing major U.S. cities is the price of housing, which has skyrocketed over the past few decades, placing incredible strain on the budgets of young and old, singles and families alike. The average one-bedroom apartment is $3,400 in San Francisco and $3,350 in New York City, making these meccas of innovation increasingly out of reach for even well-funded startup founders, let alone artists or educators.

Housing is not enough to satisfy the modern knowledge-economy worker, though. There is an expectation that any neighborhood will have a laundry list of amenities, from nice, cheap restaurants, open spaces and cultural institutions to critical human services like grocery stores, dry cleaners and hair salons.

Today, a zoning board would just try to demand that various developments include the necessary amenities as part of the permitting process, leading to food deserts and the curious soullessness of some urban neighborhoods. In Larson and Clippinger’s world, though, rules-based models would be thrown out in favor of “dynamic, self-regulating systems” based around what might agnostically be called tokens.

Every neighborhood is made up of different types of people with different life goals. Larson explained that “we can model these different scenarios of who we want working here, and what kind of amenities we want, then that can be delineated mathematically as algorithms, and the incentives can be dynamic based on real-time data feeds.”

The idea is to first take datasets like mobility times, unit economics, amenities scores and health outcomes, among many others, and feed them into a machine learning model that tries to maximize local residents’ happiness. Tokens would then become a currency providing signals to the market of what should be added to the community, or removed, to improve happiness.

A luxury apartment developer might have to pay tokens, especially if the building didn’t offer any critical amenities, while another developer who converts their property to open space might be completely subsidized by tokens that had been previously paid into the system. “You don’t have to break down the signals into a single price mechanism,” Clippinger said. Instead, with “feedback loops, you know that there are dynamic ranges you are trying to keep.”
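As a rough illustration of how such a token incentive could behave, here is a toy model with entirely invented amenity scores, targets and rates. It charges a development that depletes under-supplied amenities and subsidizes one that adds them:

```python
def token_flow(current, target, contribution, rate=10.0):
    """Tokens earned (positive) or owed (negative) by a development,
    based on how it moves scarce amenities toward neighborhood targets.
    All scores are on an invented 0-1 scale."""
    flow = 0.0
    for amenity, want in target.items():
        deficit = max(0.0, want - current.get(amenity, 0.0))  # how scarce it is
        added = contribution.get(amenity, 0.0)                # what this project adds
        flow += rate * deficit * added
    return flow

target  = {"open_space": 1.0, "grocery": 1.0}   # desired amenity levels
current = {"open_space": 0.2, "grocery": 0.9}   # neighborhood today

luxury_tower = {"open_space": -0.1, "grocery": 0.0}  # removes scarce open space
park_convert = {"open_space": 0.5, "grocery": 0.0}   # adds scarce open space

print(token_flow(current, target, luxury_tower))  # -0.8, pays tokens in
print(token_flow(current, target, park_convert))  # 4.0, subsidized
```

The numbers are arbitrary; the point is the mechanism, where the charge or subsidy scales with how scarce an amenity currently is, rather than a fixed rulebook.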

Compare that systems-based approach to the complexity we have today. As architectural and urban planning tastes have changed and developers have discovered loopholes, city councils have updated the codes, and then updated the updates. New York City’s official zoning book is now 4,257 pages long (warning: 83 MB PDF file), the point of which is to rationalize what a beautiful, functional city should look like. That complexity has bred a massive influence and lobbying industry, as well as startups like Envelope that try to make sense of it all.

A systems-based approach would throw out the rules while still aiming for positive outcomes. Larson and Clippinger want to go one step further, though, and incorporate tokens into everything in a local neighborhood economy, including the acquisition of an apartment itself. In such a model, “you have a participation right,” Clippinger said. So, for instance, a local public school teacher or a popular baker might have access to an apartment unit in a neighborhood without paying the same amount as a banker who doesn’t engage as much with neighbors.

“Wouldn’t it be great to create an alternative where instead of optimizing for financial benefits, we could optimize for social benefits, and cultural benefits, and environmental benefits,” Larson said. Pro-social behavior could be rewarded through the token system, ensuring that the people who made a neighborhood vibrant could remain part of it, while also offering newcomers a chance to get involved. Those tokens could also potentially be fungible across cities, so a participation-right token for New York City might also give you access to neighborhoods in Europe or Asia.

Implementation of these sorts of systems is certainly not going to be easy. A few years ago on TechCrunch, Kim-Mai Cutler wrote a deeply researched analysis of the complexity of these issues, including the permitting process, environmental reviews, community support and opposition, as well as the basic economics of construction, all of which make housing and development one of the most intractable policy problems for municipal leaders.

That said, at least some cities have been willing to trial parts of these algorithmic models for urban development, including Barcelona and several Korean cities, according to the two researchers. At the heart of all of these experiments is a belief that the old models are no longer sufficient for the needs of today’s citizens. “This is a radically different vision … it’s post-smart cities,” Clippinger said.


IBM and MIT pen 10-year, $240M AI research partnership

IBM and MIT came together today to sign a 10-year, $240 million partnership agreement that establishes the MIT-IBM Watson AI Lab at the prestigious Cambridge, Mass. academic institution.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

Big Blue intends to invest $240 million into the lab where IBM researchers and MIT students and faculty will work side by side to conduct advanced AI research. As to what happens to the IP that the partnership produces, the sides were a little bit murky about that.

This much we know: MIT plans to publish papers related to the research, while the two parties plan to open source a good part of the code. Some of the IP will end up inside IBM products and services. MIT hopes to generate some AI-based startups as part of the bargain, too.

“The core mission of the joint lab is to bring together MIT scientists and IBM [researchers] to shape the future of AI and push the frontiers of science,” IBM’s Gil told TechCrunch.

To that end, the two parties plan to put out requests to IBM scientists and the MIT student community to submit ideas for joint research. To narrow the focus of what could be a broad endeavor, they have established a number of principles to guide the research.

This includes developing AI algorithms with the goal of getting beyond specific applications of neural network-based deep learning and finding more generalized ways to solve complex problems in the enterprise.

Secondly, they hope to harness the power of machine learning with quantum computing, an area IBM is working hard to develop right now. There is tremendous potential for AI to drive the development of quantum computing and, conversely, for quantum computing and the computing power it brings to drive the development of AI.

With IBM’s Watson Security and Healthcare divisions situated right down the street from MIT in Kendall Square, the two parties have agreed to concentrate on these two industry verticals in their work. Eventually, the two teams plan to work together to help understand the social and economic impact of AI in society, which, as we have seen, has already proven to be considerable.

While this is a big deal for both MIT and IBM, Chandrakasan made clear that the lab is but one piece of a broader campus-wide AI initiative. Still, the two sides hope the new partnership will eventually yield a number of research and commercial breakthroughs that will lead to new industries both inside IBM and in the Massachusetts startup community, particularly in the healthcare and cybersecurity areas.


Escher Reality is building the backend for cross-platform mobile AR

The potential of mobile augmented reality is clear. Last summer Pokemon Go gave a glimpse of just how big this craze could be, as thousands of excited humans converged on parks, bus stops and other locations around the world to chase virtual monsters through the lens of their smartphones.

Apple was also watching. And this summer the company signaled its own conviction in the technology by announcing ARKit: a developer toolkit to help iOS developers build augmented reality apps. CEO Tim Cook said iOS will become the world’s biggest augmented reality platform once iOS 11 hits consumers’ devices in the fall, underlining Cupertino’s expectation that big things are coming down the mobile AR pipe.

Y Combinator-backed MIT spin-out Escher Reality’s belief in the social power of mobile AR predates both of those trigger points. It’s building a cross-platform toolkit and custom backend for mobile AR developers, aiming to lower the barrier to entry for building compelling experiences, as the co-founders put it.

“Keep in mind this was before Pokemon Go,” says CEO Ross Finman, discussing how he and CTO Diana Hu founded the company approximately a year and a half ago, initially as a bit of a side project, before going all in full time last November. “Everyone thought we were crazy at that time, and now this summer is the summer for mobile augmented reality. ARKit has been the best thing ever for us.”

But if Apple has ARKit, and you can bet Google will be coming out with an Android equivalent in the not-too-distant future, where exactly does Escher Reality come in?

“Think of us more as the backend for augmented reality,” says Finman. “What we offer is cross-platform, multiuser and persistent experiences, so those are three things that Apple and ARKit don’t do. So if you want to do any type of shared AR experience, you need to connect the two different devices together, and that’s what we offer. There’s a lot of computer vision problems associated with that.”

“Think about the problem of what ARKit doesn’t offer you,” adds Hu. “If you’ve seen a lot of the current demos out there, they’re okay-ish; you can see 3D models there, but when you start thinking longer term, what does it take to create compelling AR experiences? And part of that is a lot of the tooling and a lot of the SDKs are not there to provide that functionality. Because as game developers or app developers, they don’t want to think about all that low-level stuff, and there’s a lot of genuinely complex tech going on that we have built.”

“If you think about the future, as AR becomes a bigger movement, as the next computing platform, it will need a backend to support a lot of the networking; it will need a lot of the tools that we’re building in order to build compelling AR experiences.”

“We will be offering Android support for now, but then we imagine Google will probably come out with something like that in the future,” adds Finman, couching that part of the business as the free bit in freemium, and one they’re hence more than happy to hand off to Google when the time comes.

The team has put together a demo to illustrate the kinds of mobile AR gaming experiences they’re aiming to support, in which two people play the same mobile AR game, each using their own device as a paddle.

What you’re looking at here is “very low latency, custom computer vision network protocols” enabling two players to share augmented reality at the same time, as Hu explains it.

Sketching another scenario the tech could enable, Finman says it could support a version of Pokemon Go in which friends could battle each other at the same time and see their Pokemon fight in real time. Or it could allow players to place a Gym at a very specific location that makes sense in the real world.

In essence, the team’s bet is that mobile AR, and especially mobile AR gaming, gets a whole lot more interesting with support for richly interactive, multiplayer apps that work cross-platform and cross-device. So they’re building tools and a backend to support developers wanting to build apps that can connect Android users and iPhone owners in the same augmented play space.

After all, Apple especially isn’t incentivized to help support AR collaboration on Android. Which leaves room for a neutral third party to help bridge platform and hardware gaps and smooth AR play for every mobile gamer.

The core tech is basically knitting different SLAM maps and network connections together in an efficient way, says Finman, i.e. without the latency that would make a game unplayable, so that it runs in real time and is a consistent experience. So, tuning everything up for mobile processors.

“We go down to not just the network layer, but even to the assembly level, so that we can run some of the execution instructions very efficiently, and some of the image processing on the GPU, for phones,” says Hu. “So on a high level it is a SLAM system, but the exact method and how we engineered it is novel for efficient mobile devices.”
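The geometric core of knitting two devices’ SLAM maps together can be sketched simply: if both phones can locate a common physical anchor in their own coordinate frames, an object placed by one can be positioned consistently for the other. The 2D sketch below is purely illustrative, with invented poses; the real system works in 3D and also has to handle drift, latency and networking:

```python
import math

def make_pose(theta, tx, ty):
    """A 2D rigid transform (rotation theta, translation (tx, ty)) as a 3x3 matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(t):
    """Invert a rigid transform: rotate by -theta, undo the translation."""
    c, s, tx, ty = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * tx + s * ty)],
            [-s, c, s * tx - c * ty],
            [0.0, 0.0, 1.0]]

def apply(t, p):
    x, y = p
    return (t[0][0] * x + t[0][1] * y + t[0][2],
            t[1][0] * x + t[1][1] * y + t[1][2])

# Each phone sees the same physical anchor in its own SLAM frame.
anchor_in_a = make_pose(0.0, 1.0, 0.0)          # anchor pose in device A's frame
anchor_in_b = make_pose(math.pi / 2, 0.0, 2.0)  # same anchor in device B's frame

# Transform from B's frame to A's frame: B -> anchor -> A.
b_to_a = mat_mul(anchor_in_a, invert(anchor_in_b))

# A virtual object B places at its anchor shows up at A's anchor position.
obj_in_b = (0.0, 2.0)
print(apply(b_to_a, obj_in_b))  # (1.0, 0.0)
```

Everything else the startup describes (low-latency protocols, GPU image processing, persistence) sits around this frame-alignment step.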

“Consider ARKit as step one; we’re steps two and three,” adds Finman. “You can do multi-user experiences, but then you can also do persistent experiences: once you turn off the app, once you start it up again, all the objects that you’ve left will be in the same location.”


“People can collaborate in AR experiences at the same time,” adds Hu. “That’s one main thing that we can really offer, that Google or Apple wouldn’t provide.”

Hardware-wise, their system supports premium smartphones from the last three years. Although, looking ahead, they say they see no reason why they wouldn’t expand to support additional types of hardware, such as headsets, when/if those start gaining traction too.

“In mobile there’s a billion devices out there that can run augmented reality right now,” notes Finman. “Apple has one part of the market; Android has a larger part. That’s where you’re going to see the most adoption by developers in the short term.”

Escher Reality was founded approximately a year and a half ago, spun out of MIT and initially bootstrapped in Finman’s dorm room, first as a bit of a side project, before the founders went all in full time in November. The co-founders go back a decade or so as friends, and say they had often kicked around startup ideas and been interested in augmented reality.

Finman describes the business they’ve ended up co-founding as “actually just a nice combination of both of our backgrounds. For me, I was working on my PhD at MIT in 3D perception; it’s the same type of technology underneath,” he tells TechCrunch.

“I’ve been in industry running a lot of different teams in computer vision and data science,” adds Hu. “So a lot of experience bringing research into production and building large-scale data systems with low latency.”

They now have five people working full time on the startup, and two part time. At this point the SDK is being used by a limited number of developers, with a wait-list for new sign-ups. They’re aiming to open it up to all comers in the fall.

“We’re targeting games studios to begin with,” says Finman. “The technology can be used across many different industries, but we’re going after gaming first because they are usually at the cutting edge of new technology and adoption, and then there’s a whole bunch of really smart developers that are going after interesting new projects.”

“One of the reasons why augmented reality is considered so much bigger [is that] the shared experiences in the real world really open up a whole lot of new capabilities and interactions and experiences that go beyond the current idea of augmented reality. But really it opens up the door for so many different possibilities,” he adds.

Discussing some of the compelling experiences the team sees coming down the mobile AR pipe, he points to three areas he reckons the technology can especially support, namely: education, visualization and entertainment.

“When you have to look at a piece of paper and imagine what’s in the real world for building anything, getting directions, working at a distance, that’s all going to need shared augmented reality experiences,” he suggests.

Although, in the nearer term, consumer entertainment (and specifically gaming) is the team’s first bet for traction.

“In the entertainment space, on the consumer side, you’re going to see short films. So beyond just Snapchat, it’s kind of real-time special effects, where you are able to video and set up your own kind of movie scene,” he suggests.

Designing games in AR does also present developers with new conceptual and design challenges, of course, which in turn bring additional development challenges, and the toolkit is being designed to help with those challenges.

“If you think about augmented reality, there’s two new mechanics that you can work with; one is the position of the phone now matters,” notes Finman. “The second thing is the real world becomes content. So like the map data, the real world, can be integrated into the game. So these really are two mechanics that didn’t exist in any other medium before.”

“From a developer standpoint, one added constraint with augmented reality is because it depends on the real world, it’s difficult to debug, so we’ve developed tools so that you can play back logs. So then you can actually run through videos that were taken in the real world and interact with them in a simulated environment.”

Discussing some of the ideas and clever mechanics they’re seeing early developers play with, he points to color as one interesting area. “Thinking about the real world as content is really fascinating,” he says. “Think about color as a resource. So then you can mine color from the real world. So if you want more gold, put up more Post-it notes.”

The business model for Escher Reality’s SDK is usage-based, meaning they will charge developers for usage on a sliding scale that reflects the success of their applications. It’s also offered as a Unity plug-in so game developers can easily integrate it into current dev environments.

“It’s a very similar model to Unity, which encourages a very healthy indie developer ecosystem where they’re not charging any money until you actually start making money,” says Hu. “So developers can start working on it, and during development time they don’t get charged anything. Even when they launch it, if they don’t have that many users they don’t get charged; it’s only when they start making money that we also start making money. So in that sense a lot of the incentives align pretty well.”
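A sliding usage-based scale of the kind Hu describes might look something like the toy billing function below; the free threshold, tiers and per-user rates are entirely hypothetical, not Escher Reality’s actual pricing:

```python
def monthly_charge(users, free_tier=1_000,
                   tiers=((10_000, 0.05), (100_000, 0.03), (float("inf"), 0.01))):
    """Toy usage-based bill: free below a threshold, then progressively
    cheaper per-user rates at higher volume (all numbers hypothetical)."""
    charge, lower = 0.0, free_tier
    for upper, rate in tiers:
        # Bill only the slice of users that falls inside this tier.
        charge += max(0, min(users, upper) - lower) * rate
        lower = upper
        if users <= upper:
            break
    return charge

print(monthly_charge(500))     # 0.0, still inside the free tier
print(monthly_charge(20_000))  # 750.0 (9,000 x 0.05 + 10,000 x 0.03)
```

The shape matches the quote: no charge while an app is small, and the vendor only starts making money once the developer does.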

The startup, which is graduating YC in the summer 2017 batch and now heading toward demo day, will be looking to raise funding so that it can amp up its bandwidth to support more developers. Once they’ve secured additional outside investment, the plan is to sign on and work with as many gaming studios as practicable, says Finman, as well as be heads down on building the product.

“The AR space is just exploding at the moment, so we need to make sure we can move fast enough to keep up with it,” he adds.



MIT CSAIL research offers a fully automated way to peer inside neural nets

MIT’s Computer Science and Artificial Intelligence Lab has devised a way to look inside neural networks and shed some light on how they’re actually making decisions. The new process is a fully automated version of the system the research team behind it presented two years ago, which used human reviewers to achieve the same ends.

Coming up with a method that can provide similar outcomes without human review could be a significant step toward helping us understand why neural networks that perform well are able to succeed as well as they do. Current deep learning techniques leave a lot of questions around how systems actually arrive at their results: the networks employ successive layers of signal processing to categorize objects, translate text or perform other functions, but we have very few means of gaining insight into how each layer of the network is doing its actual decision-making.

The MIT CSAIL team’s system uses doctored neural nets that report back the strength with which every individual node responds to a given input image, and the images that produce the strongest response are then investigated. This analysis was originally performed by Mechanical Turk workers, who would catalog each image based on the specific visual concepts found in it, but now that work has been automated, so the categorization is machine-generated.
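The first stage of that pipeline, ranking the images that drive each node hardest, can be sketched as follows. The unit names and activation values here are invented for illustration; the real system records activations from an instrumented network rather than a hand-written dictionary:

```python
def top_images_per_unit(activations, k=3):
    """Given per-image response strengths for each unit of an
    instrumented network, return the k images that activate each
    unit most strongly; these are the candidates a labeling stage
    would then tag with visual concepts."""
    ranked = {}
    for unit, per_image in activations.items():
        # Sort image ids by recorded activation, strongest first.
        ranked[unit] = sorted(per_image, key=per_image.get, reverse=True)[:k]
    return ranked

# Hypothetical recorded activations: unit -> {image_id: response strength}
activations = {
    "conv5_unit_12": {"img_a": 0.91, "img_b": 0.15, "img_c": 0.78, "img_d": 0.40},
    "conv5_unit_47": {"img_a": 0.05, "img_b": 0.88, "img_c": 0.12, "img_d": 0.86},
}
print(top_images_per_unit(activations, k=2))
# {'conv5_unit_12': ['img_a', 'img_c'], 'conv5_unit_47': ['img_b', 'img_d']}
```

The automation described in the article replaces the human step that follows this ranking: labeling what the top images for each unit have in common.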

Already, the research is providing interesting insight into how neural nets operate. For example, it shows that a network trained to add color to black-and-white images ends up concentrating a significant portion of its nodes on identifying textures in the pictures. It also found that networks trained to identify objects in video dedicate many of their nodes to scene identification, while networks trained to identify scenes do exactly the opposite, committing many nodes to identifying objects.

Because we don’t fully understand how humans think, categorize and distinguish information, either, and neural nets are based on hypothetical models of human thought, the research from this CSAIL team could eventually shed light on questions in neuroscience, too. The paper will be presented at this year’s Computer Vision and Pattern Recognition conference, and should provoke plenty of interest from the artificial intelligence research community.
