It’s Time for Enterprise Storage to Switch Technology Foundations

Thirty years ago, the technology world moved in two distinct architectural directions for resiliency: enterprise computing and networking/internet proceeded toward a loosely coupled resiliency model, whereas storage focused on a tightly coupled model.

Both were viable options and, at the time, the directional choices were logical. Today, however, enterprise storage has become the weak link in modern IT, largely because these architectural tenets have not been challenged. In fact, the storage attributes that drove the use of tightly coupled architectures no longer exist, and I believe that enterprise storage should move to a loosely coupled foundation.

Death by 1000 Acquisitions


We all know and understand the “death by 1000 cuts” metaphor; Andy (our COO) recently coined a new and more modern one: “death by 1000 acquisitions.” Making the wrong acquisitions is riskier, and potentially more deadly, when companies are dealing with disruptive market transitions. Leaders of existing technology companies should pay careful attention not to make this mistake.

Open Letter to OpenStack


Dear OpenStack,

Two years ago, you were all the rage. Everyone in tech was buzzing about you. And nearly every venture capitalist I spoke with wanted to know how we worked with you.

What Happened?


Failure

In telling the story of Formation to some folks in the media last week, I was reminded by long-time storage veterans (thanks Joe Kovar – see article here) that I have had two previous attempts to create a more ubiquitous data virtualization layer. The results were… well… not great. OK, miserable failure might be more appropriate. The first was a product called VersaStor, which started at Compaq and then migrated to HP. My second attempt was a product called Invista while I was at EMC.

Third time’s a charm? I am heartened by the Henry Ford quote that “Failure is simply the opportunity to begin again, this time more intelligently.”

I did learn many things from these “failures,” but four lessons stand out.

1. To innovate you must look forward, not backward.

The design focus of both previous products was on accommodating existing (legacy) storage arrays, not on building the best forward-looking solution. Bad.

2. It is nearly impossible to create transformational technology inside of a company that will be financially impacted (negatively) by the disruption.

I am sure that most of you get this, but if you don’t, give The Innovator’s Dilemma by Clayton Christensen a read.

3. Building a large team is not the way to go.

Small teams are key to innovation success, even on “large” projects.

4. Everyone involved needs to believe!

I can’t emphasize this enough. If you want to do something disruptive then everyone from the engineers to the CEO (and especially the CEO) must “believe.” If the project must succeed then it will. If it is a “pet project” or an “alternative” then it will most certainly fail.

While there certainly are many other learnings, both organizational and technical, these four stand out as the most significant.

And they are embodied in our third beginning. And I believe this time is definitely a charm.

Stealth Mode

One of my favorite terms for startups is “stealth mode.” It is just so mysterious to say, “we are in stealth mode right now…” It is essentially a way for us geeks to add intrigue and mystery to technology.  The basic process for stealth mode seems to be: 1) put up a web site and 2) put on the site that you are in stealth mode and can’t say anything. It makes me wonder, if I really wanted something to be secret, why put up the web site in the first place?  The answer is simple; it’s just no fun to have a secret when no one knows you have it.

I am reminded of the movie line “the first rule of Fight Club is you do not talk about Fight Club.” In the era of so much clutter and information overload, withholding information can create more interest than providing it. Just look at the lengths people go to trying to figure out what Apple is doing next.

The time has come for my current company, Formation Data Systems, to exit our “stealth mode” phase. We never were really that “stealth” in the first place; we have been speaking with lots of analysts, potential customers and advisors all throughout our development process. Still, until today, we never disclosed the level of investment in the company, the fact that we have been in Alpha testing with customers for several months, or the fact that we will deliver Beta software by the end of the year.

We are super-excited about the potential but, alas, as we are not Apple, we are not going to get notoriety through silence.


To converge or not to converge? That is today’s IT question

Owning and using technology is very much like deciding what tools to have in your garage. There are people who will buy the 64-piece wrench set in standard and metric sizes and others who will simply buy one crescent wrench and call it good. Technology presents the same choices: you can buy many individual products to meet individual needs, or look to converged technology platforms as an alternative approach.

There is no universal right answer, but I believe there is a very specific decision process that should be used when evaluating different solutions. It comes down to 3 rules.

Rule 1:  Assume you will use the simplest, most cost-efficient “converged” solution and then see if there are any use cases where it simply WILL NOT work.

Converged solutions will tend to be “good” at everything and “great” at nothing. In each individual case, there will likely be a specialty product that comes out on top. The reasons not to buy specialty solutions are that they are generally more costly, more complex, and less flexible; and because each one covers only a single need, you will always wind up with multiple solutions.

The best example is your smartphone. Today, most of us choose to carry one device with us and use it as a phone, camera, GPS, music/video player and browser. There are definitely better single-function devices in every category, yet most of us still prefer the convenience, cost-effectiveness and simplicity of a single device.

Rule 2: Don’t use a variable cost model. 

Let’s say you already own a digital camera and are looking for a new phone. Should you buy a smartphone even though you don’t need its camera? In most cases, the answer is yes because it will offer you an overall improvement for minimal incremental cost. While not strictly essential, it will easily pay for itself over time.

If you use a strict variable cost model, you wind up never innovating and only buying the next point product to meet each single need.

Rule 3: It is OK to do both.

I have both an iPhone and a fancy Canon 22 megapixel camera. I probably take 10 times more pictures with the iPhone (because it is convenient) but there are still times when I want the best possible picture and will sacrifice convenience for specialized function.

Converged platforms are never going to cover 100% of use cases, and there will always be a need for specialized solutions. But it is often best to use converged platforms where possible and supplement them with specialized products where necessary.

The compelling reasons to look at converged products are simplicity, convenience and cost. I believe that, within IT, converged solutions are about to take center stage just as smartphones did 5 years ago. Convergence comes in many forms; there are cloud-based SaaS solutions that provide “convergence” by aggregating customers onto a single platform, and solutions that integrate server, network and storage infrastructure.

With our new company, Formation Data Systems, our focus is converging data storage resources like flash, disk and even cloud storage into a single virtual environment that can be used to meet all types of application data storage needs. In this way, a common set of physical resources can be shared to deliver block, file and object formats with varying performance and reliability metrics. 
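
To make that concrete, here is a deliberately toy sketch (my illustration, not Formation’s actual design) of the idea that different access formats can be thin front ends over one shared, key-addressed pool of resources:

```python
class SharedPool:
    """Toy common resource layer: every front end stores key -> bytes."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key, b"")


class BlockFrontend:
    """Block access: fixed-size blocks addressed by LBA."""
    def __init__(self, pool, volume):
        self.pool, self.volume = pool, volume

    def write_block(self, lba, data):
        self.pool.put(f"{self.volume}/block/{lba}", data)

    def read_block(self, lba):
        return self.pool.get(f"{self.volume}/block/{lba}")


class ObjectFrontend:
    """Object access: named blobs in a bucket."""
    def __init__(self, pool, bucket):
        self.pool, self.bucket = pool, bucket

    def put_object(self, name, data):
        self.pool.put(f"{self.bucket}/object/{name}", data)

    def get_object(self, name):
        return self.pool.get(f"{self.bucket}/object/{name}")


pool = SharedPool()  # one set of shared physical resources...
BlockFrontend(pool, "vol1").write_block(0, b"\x00" * 4096)
ObjectFrontend(pool, "photos").put_object("cat.jpg", b"...")  # ...many formats
```

A file front end would layer paths onto the same pool the same way; performance and reliability policies then become properties of the shared pool rather than of separate silos.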

 

 


Status Reports are Bullsh*t

As we grow our company, the question of how to ensure good communication becomes a relevant topic and, inevitably, the idea of instituting status reports rears its ugly head.

I hate status reports. I believe they are one of the biggest time-wasters in business.  Needless to say, we have banned them at FDS and I think you ought to as well. Let me explain.

The organized sharing of important information is critical for good business. Technology today, unfortunately, gives us the ability to create much more information than we can consume; thus, success is now much more about WHAT information should be shared and how.

The classic status report (where a person writes down what they did in prose) is like watching a TV infomercial. Infomercials attempt to look like unbiased “information sharing” but they are, of course, just well-scripted commercials. Individuals often like status reports because they can create their own “commercial.” Good news can be embellished and bad news downplayed, if not ignored completely. Some project stats looking bad? The answer is simple. Don’t include those stats in your report!

The classic corporate annual report is a mixed bag. The part that most will review is the numbers. The numbers represent facts and express data that can be compared across peers and with prior performance. I equate this data to the score of a football game. While the score does not reflect the nuances of how you played, it shows clearly whether you won or lost.

The remaining prose in annual reports is pretty much marketing fluff. It is not that the information is incorrect; it is just not “balanced.” The waves of legislation requiring more disclosure have not helped clarity at all. Companies now just go to the opposite extreme and list every possible risk factor imaginable, making it impossible to understand the legitimate business risks.

Having trashed the status report, let me provide a few alternatives that are much more effective for managing large teams.

Dashboards: Dashboards are great when they are based on hard data and metrics.  A good dashboard drives metric-based goals and provides useful insights.

Email/Phone/Text: If there is an issue or a problem, it should be addressed immediately.  Pick your favorite method…

1x1s: Managers should hold 1x1s to probe and get insight by asking the hard questions.

Stand-up meetings: Quick meetings are a great way to communicate current status effectively to the group.

Wikis and Group Chat: These are great for real-time activities, building specifications and issue discussions. Everyone can quickly stay updated on current status and issues.

It is easy to create massive amounts of content and overwhelm people with information. How many of you have actually read the disclosure statements sent out by your bank or investment funds? How many of you read the terms and conditions for a piece of software? Or do you just click accept? I believe that more data actually causes us to be less diligent and more willing to just click “accept.”

The information overload that has evolved via the 2 L’s (Litigation and Legislation) is something we have to live with in contracts and proxy statements but this form of high-volume communication should not be allowed to permeate your organization. 

Our goal is to build an information flow that will provide the greatest insight with the least amount of volume. With simple and clear dashboards, there are no excuses for managers who claim to be in the dark or overwhelmed by too much data.


Will the success of public clouds make enterprise IT go the way of the dinosaur? It depends on whether it adapts.

Making Enterprise IT a competitive weapon for your business – again.

Some are starting to question the continued existence of Enterprise IT. I am hearing an increasing number of people say that, for infrastructure and application platform delivery, the era of Enterprise IT operating private datacenters is over. AWS and Azure have transformational models that have already won; now it’s just a matter of time. Sure, they say, Enterprise IT will be around for a long time but will simply exist to support legacy systems and provide token oversight of the public cloud infrastructure providers. The game is over. Amazon and its ilk win.

Hold on. While I agree that Amazon and Microsoft have created very innovative offerings, this is far from the last inning. If Enterprise IT fails to address disruptive change, then I guess “failure” becomes a very real possibility. If, however, IT can embrace and extend the same concepts that are delivering these improvements within web datacenters, then Enterprise IT can, and will, reestablish itself as a key competitive differentiator for business.

The new world of modern applications looks nothing like today’s client-server world. Modern next-generation computing drives a fundamental infrastructure change on a level unseen since the transition from mainframe to client-server computing. So, unlike most of the advancements over the past 20 years, adapting requires a radically different approach to achieve success.

The New Rules:

Therefore, if you’re invested in Enterprise IT, and you believe significant disruption is at hand, and you want to leverage it (vs. being consumed by it), these ten recommendations will maximize your success. They span technical requirements, business practices and general advice, and encompass key elements in every area.

    1.   Demand Incompatibility

A natural course of action with any new technology is to embrace/extend legacy stuff. In fact, most major vendors attempt to define this as “critical.” Don’t fall for it. If the change is disruptive, then plan to create two worlds (new and old) instead of trying to merge them. The strength of disruptive technology is almost always maximized by focus.

As client-server computing evolved from mainframes, most IT organizations simply created two environments and then put new applications into the most appropriate one, creating only high-level interactions between the two. This maximized the strengths of both environments.

Simply stated, putting something new on top of something old does not make the old stuff better.

    2.    Solve for 90% of the target

In terms of this new disruption (I will call it hyper-scale computing, but it involves many elements), Item 1 denotes the need to build a new environment. When specifying this environment, there will be a desire to maximize flexibility in order to drive maximum use. Resist it: building an optimized cloud means optimizing around mainstream needs and excluding the corner cases. A big reason this disruption is possible is the use of common commodity infrastructure, so for this new technology, keeping it simple is critical.

It is likely that many businesses will have applications that require extreme performance (or other capabilities). Consider leaving those applications in the “custom” category; exclude them from the scope of work.

     3.   It must be “Scale Out”

On the technical front, several elements are foundational in hyper-scale computing, so ensure all new elements have a “scale-out” architecture (vs. scale-up). Scale-out is a requirement for every system element.
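
One classic building block for scale-out designs is consistent hashing, which lets capacity grow node by node while moving only a small fraction of the data. A minimal sketch (the node names, key names, and virtual-node count are illustrative assumptions, not any particular product’s design):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to nodes so adding/removing a node only remaps ~1/N of keys."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes  # virtual nodes smooth out the distribution
        self.ring = []        # sorted list of (hash, node)
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}:{i}"), node))
        self.ring.sort()

    def node_for(self, key):
        # first ring position clockwise of the key's hash (wraps around)
        i = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("volume-42"))  # e.g. 'node-b'
ring.add_node("node-d")            # scale out: most keys stay where they are
```

Add “node-d” and roughly a quarter of the keys move to it; nothing else changes. That property is what lets a scale-out system grow without forklift migrations.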

    4.   Require Everything as a Service (EaaS)

Regardless of whether you deploy network capabilities, data services, security services or other platform elements, every resource or function should be projected within the system as a service. Each service should be multi-tenant, include user-provisioning capabilities, and provide use/cost accounting capabilities.
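
As a purely illustrative sketch of that contract (the class and method names are my assumptions, not any real product’s API), every platform element would expose tenant-scoped provisioning plus metering along these lines:

```python
from abc import ABC, abstractmethod

class PlatformService(ABC):
    """Illustrative 'as a service' contract: tenant-scoped
    self-provisioning plus usage/cost accounting."""

    @abstractmethod
    def provision(self, tenant_id: str, spec: dict) -> str:
        """Tenant self-service: create a resource and return its id."""

    @abstractmethod
    def deprovision(self, tenant_id: str, resource_id: str) -> None:
        """Release a resource owned by this tenant."""

    @abstractmethod
    def usage(self, tenant_id: str) -> dict:
        """Metering data for chargeback/showback."""

class BlockStorageService(PlatformService):
    """A made-up example service implementing the contract."""
    def __init__(self):
        self._volumes = {}  # (tenant_id, volume_id) -> size_gb

    def provision(self, tenant_id, spec):
        vol_id = f"vol-{len(self._volumes)}"
        self._volumes[(tenant_id, vol_id)] = spec["size_gb"]
        return vol_id

    def deprovision(self, tenant_id, resource_id):
        self._volumes.pop((tenant_id, resource_id), None)

    def usage(self, tenant_id):
        gb = sum(size for (t, _), size in self._volumes.items() if t == tenant_id)
        return {"provisioned_gb": gb}
```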

     5.  Require Control

While most assume that QoS is a requirement of any multitenant services architecture, it is so critical that I’m calling special attention to it. Hyper-scale computing is all about being able to share a massive amount of resources across a massive number of applications. Good control is paramount.
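
One classic primitive for this kind of control is a per-tenant token bucket, which caps each tenant’s request rate so no single application can starve the shared pool. A minimal sketch, with made-up rates and tenant names:

```python
import time

class TokenBucket:
    """Per-tenant rate limiter: a simple QoS primitive for sharing
    one pool of resources across many applications."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # refill in proportion to elapsed time, up to the burst cap
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # throttle: caller should queue or reject

# e.g. tenant A may issue 500 IOPS, tenant B 100 IOPS
limits = {"tenant-a": TokenBucket(500, 1000),
          "tenant-b": TokenBucket(100, 200)}
if limits["tenant-a"].allow():
    pass  # service the I/O
```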

     6.   Require a “Dynamic” environment

It goes without saying that this massive environment needs not only to be robust, it must also be modifiable in real time. Be wary of any system or service that requires downtime.

    7.   Merge development, test and production infrastructure

If you look at AWS, it does not distinguish customers or application types. You simply log in, select the services/service levels and proceed. The services themselves are robust enough not to let individual application faults propagate through the system.

Merging your development, test, and production environments yields gains in several ways: a single environment is more flexible; applications releasing to production will not require migration; and production data is more easily snapshotted for test environments.

     8.   Require only one type of “Open” - Simple Open APIs

There is more hype on the topic of open and “lock in” than probably any other topic in tech. Folks obsess about wanting/not wanting to use open source and the need for software to use certain protocols. I would argue that only a couple of things matter. First, ensure your services have open (and preferably used elsewhere) APIs. Second, look for service APIs that easily plug into cloud frameworks.

This affords the customer maximum flexibility and the supplier maximum opportunity to provide value. Unless you are planning to write code internally, demanding open source is not going to provide any “open” benefit and might diminish differentiation.

    9.   Require Analytics

The old adage that you can’t manage what you can’t measure is very true here. Taking it to the next step… you can’t improve what you can’t analyze. With web-scale systems, analytics are critical.

    10.   Wait for it

When we started our new company, Formation Data Systems (which is building a new converged data platform as a service), we were surprised by how few startups were working on solutions that really address the overall competitiveness of IT in data management. In fact, one industry CTO told me he’d reviewed 110 storage startups, and Formation Data Systems was the only one he had seen to date taking this approach. Over half were doing the same thing - some variant of flash in an array…

The point is that the Enterprise IT industry is just beginning to realize that there is a very, very large need. True discontinuity is coming. This must be the curse of leveraging client-server computing for so long. Most people in IT have never experienced a sea change like this.

Momentum is now building. It is clear to me that, with the right technology and a willingness to break with legacy technology, Enterprise IT can again become a competitive weapon for today’s businesses. 


Traffic Part 2: The Virtual Campus

In a previous blog I teed up a significant problem that, I believe, all companies in the Bay Area will eventually have to deal with. As a fast-growing startup in Silicon Valley we must recruit a diverse set of people and talent. But traffic congestion and radically increased commute times are crushing our ability to attract people from across the Bay Area.

This presents an enormous conflict. We clearly do not want our team members spending 3+ hours every day in buses, trains, and cars; we want our team members to be able to work hard but also have time to recharge. And I have not seen that working from home full-time provides the level of collaboration and engagement that a growing, thriving startup needs. Bay Area companies must address the fact that horrible and escalating commute times are limiting the talent pool. We simply can’t all work from a single location and recruit “locally.” I live only 13 miles from our HQ and it still takes me 45-60 minutes each way to commute.

So, unless scientists quickly invent the transporter beam or the CA government figures out how to fix the traffic problem (my money is on the transporter beam), the concept of having a single campus with a workforce from across the Bay Area is dying fast.

We are attempting to address this problem by creating Virtual Campuses.

This is not as simple as setting up a good video conferencing system. Our goal was to look at all the elements involved in team collaboration in a development environment. We established our plan to meet the following goals:

-       The entire Bay Area development team must be in a single location one day per week

-       New site options will be investigated if 5 or more core team members would significantly benefit. This means a > 60 min (one-way) commute time improvement

-       All of our tools must be used effectively, including our agile (carding) process, across distance

-       Collaboration must be seamless

-       Team members should be able to meet face-to-face in under an hour if needed

On the technology side, we have found a good selection of cloud-based tools for agile development and collaboration. The difficulty is that these tools are typically designed to be used individually via a browser. But we’re adapting the tools so that sites have group monitors showing sprint status and card boards. Each site has video collaboration, and all sites participate in the daily stand-up meeting.

To facilitate the ability to meet face-to-face for weekly meet-ups (or as needed), our policy is that all sites be located within walking distance of a BART station. Our HQ is at Fremont BART; our first outpost is near the Powell Street station in SF; our second will be near Pleasanton/Dublin BART. The three locations provide team members with reasonable commute times from the entire South Bay (it is a reverse commute to Fremont), Palo Alto, greater San Francisco, and the East Bay. With close proximity to BART, team members can hop from one site to another in about an hour. And no transportation is needed from BART to our offices. It is all walkable.

We are hoping that this combination of technology, location selection, and meet-up schedules will help us adapt to the new traffic realities, enrich our recruitment efforts, reduce the stress for our team members, and be positive for both diversity and the environment while not compromising our team effectiveness. At this point, it is too early to know. We will track our progress and I will blog more in the future about how it is working out.


Software Defined Marketing

 

One of the hottest new terms in tech these days is “software defined.” It is all over many tech companies’ marketing literature. A Google search for “software defined” yields 61,900,000 results. This must be a major trend!

But what is it?

As we are building a new data management platform, I thought I had better check and see if we could claim that we were “software defined storage” (those pesky marketing people always want to know things like this…), so I decided to head to the source - Wikipedia. Here is the definition.

Software-defined storage (SDS) is a term for computer data storage technologies which separate storage hardware from the software that manages the storage infrastructure. The software enabling a software-defined storage environment provides policy management for feature options such as deduplication, replication, thin provisioning, snapshots and backup.

First question: if the new era is software defined, was the last era “hardware defined”? OK, I got it, the software is separate. That is good news because we do that. Before I tell the marketing folks to go for it, I decide to verify by checking Google again. This time I look at the “sponsored links” and the first two come up as Tintri and Nutanix. I looked through the offerings of both companies and, as far as I can tell, neither sells any software. Both appear to only offer specialized HW appliances.

Maybe Wikipedia is wrong, or maybe I just need to understand the concept of “separate.” I dug deeper and found this on TechTarget:

Software-defined storage (SDS) is an approach to data storage in which the programming that controls storage-related tasks is decoupled from the physical storage hardware. 

Ahhh… decoupled! Not separate. That sounds better. So the software doesn’t have to be separate, just decoupled. But wait a minute; this moniker could be applied to half or more of the storage and data management products of the last 20 years. This is hardly new.

Maybe I should go back to Wikipedia and read the details. What I found is that SDS can be claimed if ANY of the following are true:

-       Storage Virtualization

-       Parallel NFS

-       Any OpenStack Storage stuff (including old storage arrays connected to OpenStack)

-       Storage Automation and Policy Management

-       Scale-out storage

-       You can fog a mirror (sorry, I threw that one in)

If I read this correctly, a 20-year-old RAID array using a piece of policy management SW is SDS. A basic iSCSI array connected to Cinder is SDS. A totally custom flash array that offers SLA-based management is SDS.

So – my conclusion is that Software Defined Storage is:

Anything you want to be…


Traffic

In many places in the country, people talk about the distance between two places in weird foreign terms like “miles.” I might ask a person “How far is it to Denver?” and would get an answer like “About 100 miles.”  I guess that works in some places because you can make general assumptions such as ‘I can go 60 miles per hour on freeways and 30 miles per hour on other roads...’

For us here in the Bay Area, the traffic is so ridiculous and the roads so crazy that we never talk about distance in terms of miles; that is a useless data point. We tell you how far apart things are in terms of time. If you ask me how far it is from Pleasanton to Palo Alto, the first thing I will ask you is “When do you want to go?”

This week, that trip took me 1 hour and 48 minutes. There were no accidents or major closures; just slogging through traffic. Most of the route is on a freeway. The trip is only 30 miles, yet it feels like 90. My average speed – 16 MPH!

So, aside from being a bitch-session about traffic, there are very serious ramifications for Silicon Valley companies. The simple fact is that many Bay Area commute times have doubled over the past few years. This means that we are spending much more time, money and energy getting to work. It means less time for both family and work. It means less money for fun; increased environmental impact; more stress and many other bad things.

Call me cynical, but I don’t expect government to jump in and provide a solution. We are all going to have to deal.

This is a big problem for our new company, Formation Data Systems. We are looking to more than triple the size of our team this year. We want to attract the best people with diverse backgrounds, and we need a wide variety of technical skill sets. In order to be successful we must recruit from across the Bay Area.

Increased commuting times dramatically affect the potential candidate pool for hiring. If you have a company built around a single “campus” location and the acceptable commute “distance” is cut in half, your candidate pool could be reduced by 75% or more!

 Not a good thing.
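
The back-of-the-envelope math behind that claim, assuming candidates are spread evenly across the area reachable within an acceptable commute (the density figure is a made-up illustration):

```python
import math

def candidate_pool(radius_miles, people_per_sq_mile=1000):
    # The pool scales with the area of the commutable circle around the office.
    return people_per_sq_mile * math.pi * radius_miles ** 2

before = candidate_pool(30)  # a 30-mile "acceptable commute" radius
after = candidate_pool(15)   # worsening traffic halves the reachable radius
print(f"pool shrinks by {1 - after / before:.0%}")  # pool shrinks by 75%
```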

Getting the best people into a company is already tough and long commute times for a new job are a non-starter for many candidates, especially given the number of opportunities.

Like many, I believe that teams (excluding sales and local support) can be most effective when they collaborate from a central location. It speeds communication, makes interactions easier and provides more flexibility. I also believe that group office environments foster an energy that doesn’t exist when everyone works from home. It became clear fairly early on, however, that even as a startup we had a problem with a single-campus strategy. We could not expect to attract people from San Jose, San Francisco and Danville to a single location. Even if people are willing to spend 3-4 hours per day in a car, long commute times violate our core company cultural value that team members have ample recharge time with their friends and families.

With all of these needs in mind, we have attempted to come up with an innovative solution that lets people work closer to home and enables most of the benefits of a campus setting.

We are revolutionizing the industry; why not revolutionize how we work? Our company is continuing to explore and experiment with these possibilities. I look forward to sharing updates in future blogs.


The Oracle and Google API fight – The ramifications are more profound than you may think.

Last week the Federal Appeals Court made a shocking decision in the Oracle vs. Google API fight. The court decided that Oracle could copyright its APIs for Java. This effectively makes it impossible for others to build fully compatible alternative software components.

This is huge and, in my opinion, a very, very bad thing.

We attach the word “open” to a lot of things in technology but I believe there are certain areas where true openness is critically important and APIs are at the top of my list.

“Open Source” is often portrayed as the Holy Grail of openness in technology, and yet I don’t think open source is much of a factor at all. To be clear, the open source movement itself has revolutionized the software industry. Open source components are now the building blocks of most technology companies’ offerings and have dramatically reduced the cost of developing new software. That said, it is generally not feasible for individual IT organizations to try to directly use open source components for major functions; it is simply too costly to support. It would be like me trying to build my own smartphone with open source. Sure, it’s possible, but the cost of integration and support would never be worth it.

This is why there are so many companies building products and services around open source components. Companies like Red Hat and Cloudera exist because it is simply too hard for individual companies to build and support their own software stack.  

I don’t think that the “open source-ness” of a product should be a major purchase consideration for an individual business. It is like demanding the source code to the TV set you just bought so you can have more control. For all but a few this is meaningless.

Open APIs, on the other hand, are absolutely critical for business consumers, and I believe they should be at the top of every IT checklist. Open APIs are what fuel innovation and make vigorous competition possible. APIs are today’s equivalent of the standardized protocols of 20 years ago. Without open APIs, companies and users will become trapped and innovation will slow.

While I really don’t care if the code running in my TV is open source, I do care a great deal that the APIs (the external interfaces – e.g. HDMI, remote IR codes, audio, power) used by my TV are completely open. The simple fact is that, if I am unhappy with my TV, I am going to go out and buy a new one, not try to reprogram my existing TV. The thing most likely to “trap” me into the same brand, however, is not the source code; it is the APIs.

I am a strong believer in the ability to patent true innovation in software, but APIs do not fall into that category. They are simply the interaction method across a boundary layer. Innovation accelerates when companies offer compatibility with popular APIs. Since the federal government has chosen (at least for now) to allow proprietary APIs, consumers must now require open APIs as a condition of purchase. Open APIs, I believe, are the critical factor for “open-ness” in software.
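
The boundary-layer point is easy to see in code. In this toy sketch (the names are mine, purely for illustration), the API is nothing but the contract; the innovation, and the competition, live behind it:

```python
import zlib
from abc import ABC, abstractmethod

class KeyValueAPI(ABC):
    """The API: nothing but names and signatures at the boundary."""
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class SimpleStore(KeyValueAPI):
    """One vendor's implementation."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class CompressedStore(KeyValueAPI):
    """A competitor's implementation: same API, different innovation inside."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = zlib.compress(value)
    def get(self, key):
        return zlib.decompress(self._data[key])

def save_config(store: KeyValueAPI):
    # Callers written against the API can swap implementations freely --
    # exactly the substitution a copyrighted API would foreclose.
    store.put("config", b"mode=fast")

save_config(SimpleStore())
save_config(CompressedStore())
```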


Piled Higher and Deeper

With EMC World this week, there was the usual plethora of storage announcements, from EMC as well as from others trying to co-opt the news cycle.

One thing struck me in particular – the complexity of it all… It seems like every new product won’t actually replace anything; you ‘must simply’ add it onto the mountain of stuff you already own.

I especially loved those presentation slides that portray how you could take all of your old gear and then buy more stuff and add some new flash stuff and new data management stuff and then “federate” your new and old stuff -- all to get to a simple nirvana. This is usually labeled “software defined.”

Of course this has nothing to do with simple. It does have everything to do with trying to maintain a large profitable revenue stream using micro-specialized hardware and software. To me it is like trying to make a building taller without changing the foundation. It works great until it collapses.

The reason public clouds like AWS and so many SaaS services are crushing it is simple; they don’t start with all the legacy stuff. Simplicity, efficiency and cost effectiveness will never be born from complex “federations.”

Take Apple for example; when they built the iPhone, they basically included free iPod functionality. They knew this move would cannibalize the iPod, but it was better for them to do this than to have someone else come in and do it to them. Clearly, this methodology has not been practiced in the storage industry (by large companies or even startups) for a long time.

If Apple were to build products like we see in the storage industry, we would all be wearing very large belts to carry our twenty different devices.  So, when you hear nice sounding words like “federation,” “legacy compatible,” and “software defined,” ask the question – “Is this solution really better, simpler, and less expensive?” 


Inktank Acquisition

Red Hat acquired Inktank today for $175M. Kudos to Red Hat for having the insight to see the potential significance of this new class of storage software.

With the values in tech these days the $175M price tag may not seem like a big deal.

But - it is very significant.

First, understand that Inktank is not a software company (they help companies deploy an open source product called Ceph). While I am sure they possess key skills and knowledge, I doubt they have a significant patent portfolio. I don’t believe they sell any software. They derive revenue by helping customers deploy software that is freely available.

In that light you might ask: why did Red Hat pay so much?

I figure they paid up for the same reason that we decided to start Formation. A fundamental change is about to happen in applications and infrastructure that will require a new approach to managing data.

While many will argue that the plethora of “software defined” initiatives at both big companies and startups are trying to do similar things, this is not true.

The simple way I like to think of it is that most “software defined” products seek to create a simplified front-end experience by abstracting the old, complex crap they still want to sell in the backend. Conversely, Ceph offers a new way to store data that is simple on the front end and simply uses commodity HW as the backend.

You might call both solution types “software defined,” yet they are as different as night and day. One solution hides complexity and one eliminates it. One solution reduces cost in a big way while the other requires that customers spend more in the hope of reducing “soft” costs over time.

Ceph is innovative because it seeks both to reduce backend complexity and to provide more of a services-based front-end connection.
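
To give a feel for that front end: storing an object in a Ceph cluster through its librados Python bindings takes only a few calls (the pool name and config path below are assumptions for the example):

```python
import rados  # python-rados, shipped with Ceph

# Connect to the cluster; placement and resilience across commodity
# nodes happen behind this interface.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("mypool")
try:
    ioctx.write_full("greeting", b"hello ceph")  # store an object
    print(ioctx.read("greeting"))                # -> b'hello ceph'
finally:
    ioctx.close()
    cluster.shutdown()
```

The point is not the specific bindings; it is that the client talks to a simple service interface while the cluster handles the backend on commodity hardware.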

The innovation in storage and data management is about to make a leap that has not been seen in the last 25 years. The cost and complexity will change as radically as when the first networked storage arrays were introduced.

There are many products calling themselves “software defined” that only seek to mask legacy complexity. The real game changer comes from building a new model for a data platform.


The Startup Experience: Lesson 5 - The most important question is: Why?

When building a business plan for your startup, you will begin to ask yourself several questions: who is on the team, what are you building, what is the potential market, and how much investment will be required?

However, I think the most important question to ask is: Why?

Most new companies will initially form around a new product idea; this is natural. 

With a brilliant idea in hand, it often doesn’t seem necessary to take a step back and think about “why.” In fact, many would argue: if you already have an excellent plan, do you really need a high-level strategy?

Yes!

Even if you think you have it all figured out, I encourage you to take the time to understand why your company exists – what is your overarching reason for being?

The reason is simple. It’s because your initial plan will probably be wrong and you will need to modify, pivot and maybe even reset your plan. Understanding “why” is critical in determining both when and how to change.

When thinking of the “why,” visualize it as your company’s mission and the direction it will take you. As you shape and grow your company, building a shared understanding of “why” can be transformational.

Employees who recognize not just what you want to build, but why you are building it, will be able to make better trade-offs, react faster to external shifts, and be more agile overall.

Conversely, if you have a comprehensive product plan but the team lacks a common understanding of the mission, you risk the team being unable to make the right trade-offs and decisions.

Taking the time to ask yourself “why” and to understand your company’s mission is a crucial first step; the value of this knowledge is instrumental.
