The 7-Minute Startup Pitch Deck

Photo Illustration: Dana Lipnicka

A startup pitch should be no longer than 7 minutes! Not even 11 minutes! It has to be the magic, infinite number 7! I say infinite because in some cultures, including my own, the number 7 somehow conceptualizes infinity. You might also relate the number to myths you may have heard of, like the 7 lives of a cat, the 700 concubines of King Solomon, 777 as the number of God, or 777 as a set of access permissions on Unix!

Making a presentation in 7 minutes actually seemed quite a challenge. I sat down with my partner and created 7 slides. When you present something you love, you can either present it in one sentence (they call it an elevator pitch, a term I dislike for the begging connotation it has), or you need enough time for others to warm to your enthusiasm, which might take from a few minutes up to, let's say, an evening in the pub. But as it turns out, you can deliver a #pitch in 7 minutes, and here is how. This is what I learned while preparing a 7-minute presentation with my partner for a new startup!

Rule 1: Forget the technology!

This is really hard if you are betting on a solution that relies on an alternative technology and competes with traditional methods. You know the vision, you know the advantages, and it all starts with the technology, so you expect to discuss technology advantages… Wrong! There are many business-minded or trendy people raising funds out there. They know nothing of technology; some of them cannot even put together a team of developers because they are so shallow, and yet they can raise millions. Technology is not fundamental in a pitch; it is an adjective!

Rule 2: Focus on the problem!

Everyone you meet from the business world, whether it is the university professor who knows only theory or the banker, will give you the same advice again and again: go out and do some market research, ask people about their problems. Know the problems, but more importantly, know how to express them in proper form. The best approach is to be a consumer of your own solution. Personally, I prefer to solve a problem that concerns me; this way I find the motivation to work on the solution. I believe most enthusiastic young entrepreneurs do the same. It is not a necessary approach, though. Bottom line: investors invest in a solution to a problem that affects a large “market”.

Rule 3: Numbers are all that count

I mentioned the market in Rule 2. If you want to attract the attention of investors, show them the money. Your startup is only as good as the potential revenue it can make. It seems the best you can do in 7 minutes is to convince someone that you are targeting a real problem affecting a large market. Look at the AirBnB presentations: the only thing you take with you is the potential revenue they will be making.

Rule 4: Competitive Advantage

This is also critical to show: how you will enter the market. I originally thought this rule was about the technical advantages, but since Rule 1 says no technology talk, focus instead on the distinct aspects you can use to draw a contrast with competitors from a market perspective: the advantage your solution delivers, without too much focus on the technology itself.

Rule 5: People are also part of the package

You cannot focus on the technology, but you can “sell” the people! Who your partners are, what makes them extraordinary people to work with, and what your startup gains from the people on your team is all you have to convince investors that your solution can solve the problem you are describing and challenge the competitors you are facing.

The rest is cliché.

 

A model?

The idea came to me as one switches on a light, one day when by chance there fell into my hands an old dusty diagram, the work of some unknown predecessor of mine. Since a chemist does not think, indeed does not live, without models, I idly went about representing them for myself, drawing on paper the long chains of silicon, oxygen, iron and magnesium, with the nickel caught between their links, and I did not feel much different from the remote hunter of Altamira who painted an antelope on the rock wall so that the next day's hunt would be lucky.

Primo Levi, “The Periodic Table”, 1975

 

Is Facebook discriminating against FB Pages!?

While making some routine boredom visits to websites on social media, I noticed something peculiar. The company I used to work for has a FB page with over 170K followers. Since I still have some access to their information channels, I realized that the visibility of posts on their FB page was very low. I checked the latest posts: each reached only 1,000-4,000 viewers, and (almost?) never more than that!!

This seemed strange, especially since for some time even my own stream of statuses had been hiding their posts. I had to test whether the same applies to other, smaller pages as well.

So I decided to compare the reach and engagement of 2 different pages, a big one and a small one. I just had to find a smaller page with the same profile and see what happens. I recalled an old friend who is a radio moderator and has a FB page (as far as I know, there is no radio moderator without one). I figured he would have at least a couple hundred followers; all I needed to do was follow his statuses and launch a similar status on the FB page with 170K followers. It turned out he had 10K followers! Great for my test.

So at a certain moment of the day, the page with 10K followers pushed a status with a poll that would definitely attract attention, since it referred to a hot topic! As we had agreed, we launched the same status on the page with 170K followers, and the result was astonishing: the 10K page had double the visibility of the 170K page… This had to be wrong! This virtual world of FB does not respect its own rules!

This is probably not news to many social media experts; I found out it has been going on for some time. I find it totally unacceptable!

People join Facebook with the intention to follow their friends, or the prominent entities (such as FB pages) that carry information relevant to them. Some might say that Facebook and social media are about following people! I have a different perspective: we use social media to get new, relevant information.
And if we decide that a certain source of information is relevant to us, then that is what we expect to follow. This is the promise social media were born upon! You follow whom you are interested in! This is the never-written promise that made social media pages popular (along with the vanity of our souls, of course)!

Yes! I know what some might be thinking! A page that made it to 170K followers did not do so by posting one status every 3 days! They did it by posting 10, 20, 30 statuses per day.

But it is up to the followers to decide whether those statuses become boring or not. Believe me, we have all removed people who post irrelevant noise from our following streams… It is also understandable that FB wants to tweak its algorithm to give visibility to new pages as well. But these changes should not be discriminatory, especially toward pages that have a lot of followers. Unless Facebook categorizes these pages as potential buyers for its ad system, in which case we are in another dirty discussion…

 

Lexical Distance Among the Languages of Europe / Chomsky, Tyshchenko and Albanian

This chart shows the lexical distance — that is, the degree of overall vocabulary divergence — among the major languages of Europe. The original research data for the chart comes from K. Tyshchenko (1999), Metatheory of Linguistics. (Published in Russian.)

 

I was reading this article, which was immediately shared among my friends because it distinguishes Albanian as a stand-alone language, and it is nice to see your own language on the map….

Although it is nice to be on the chart, and although the study (which I have not read) certainly has some solid arguments, I would not completely agree with this representation. I find it hard to understand how Albanian could be closer to Slavic languages than to Romance languages!?!?

Also, Albanian should have a direct connection to Germanic languages, and if you want to show roots in Indo-European languages, you have to put those on the map as well. Such studies should be done on the basic, very primitive words people use for simple things (like eat, drink, sit, cover, etc.), which are not part of technological/cultural developments.

First of all, Albanian does not connect to Germanic languages through Greek; it has a direct connection. Let’s look at some primitive words:

German-Albanian similarities:
German Flackern – English translation: flare – with the probable Albanian root flakë, quite a primitive word. (Albanian flakë also translates into English as flame or blaze.)
German Stuhl – English chair -> Albanian stol, for a pre-chair sitting place.

An Albanian Stol

And this is not all.

Let’s link Albanian to some old “Indo-Language”

Albanian “Ha”, meaning “to eat” (I doubt there is a more primitive word than “eat”, outside of grammar rules, which are influenced by Latin) – Bengali (Indian?): eat: খাও khao

The same word, “Pi”, as in “drink”, is used in Bengali, but I am missing its correct spelling.

 

There is more linking Albanian and Latin, and Albanian and Greek (Alb: mendja, Gre: montya, Eng: mind), but due to the close borders these are sometimes borrowed words and sometimes words showing the same root.

 

 

No Sir, I’m a dreamer

“Dr. Kelso: Are you an idiot?
J.D.: No, sir, I’m a dreamer”

- Scrubs

CSS Frameworks

Below is an extensive list of CSS Frameworks that can be used to develop webpages. Although Twitter Bootstrap is one of the most recognized frameworks today, the list includes some very nice projects which are all worth keeping an eye on.

Twitter Bootstrap

Sleek, intuitive, and powerful front-end framework for faster and easier web development.

Responsive: Yes

Website: http://twitter.github.com/bootstrap/

Foundation

The most advanced responsive front-end framework in the world.

Foundation 3 is built with Sass, a powerful CSS preprocessor, which allows us to much more quickly develop Foundation itself and gives you new tools to quickly customize and build on top of Foundation.

Responsive: Yes

Website: http://foundation.zurb.com/

960 Grid System

Simple grid system

The 960 Grid System is an effort to streamline web development workflow by providing commonly used dimensions, based on a width of 960 pixels. There are two variants: 12 and 16 columns, which can be used separately or in tandem.

Responsive: Yes

Website: http://960.gs/

Skeleton

A Beautiful Boilerplate for Responsive, Mobile-Friendly Development.

Skeleton is a small collection of CSS files that can help you rapidly develop sites that look beautiful at any size, be it a 17″ laptop screen or an iPhone.

Responsive: Yes

Website: http://www.getskeleton.com/

99lime HTML KickStart

Ultra–Lean HTML Building Blocks for Rapid Website Production.

HTML KickStart is an ultra-lean set of HTML5, CSS, and jQuery (JavaScript) files, layouts, and elements designed to give you a head start and save you tens of hours on your next web project.

Responsive: No

Website: http://www.99lime.com/

Kube

CSS-framework for professional developers.

Minimal and enough. Adaptive and responsive. Revolution grid and beautiful typography. No imposed styles and freedom.

Responsive: Yes

Website: http://imperavi.com/kube/

Less Framework

An adaptive CSS grid system.

Less Framework is a CSS grid system for designing adaptive web­sites. It contains 4 layouts and 3 sets of typography presets, all based on a single grid.

Responsive: Yes

Website: http://lessframework.com/

Flaminwork

The tiny front-end framework for lazy developers.

Responsive: No

Website: http://flaminwork.com/

G5 Framework

(X)HTML5, CSS3, PHP & jQuery Front End Framework.

G5 Framework started as a personal project. In an attempt to speed up workflow, reuse the best coding practices & similar coding techniques, the framework serves as a starter file for new websites.

Responsive: No

Website: http://framework.gregbabula.info/

Easy Framework

Your new starting point for every front-end project!

Easy is a CSS/HTML/JavaScript framework started as a personal project and then grew into something more. The idea behind it is to reduce the amount of time spent on setting up the basic master HTML template by reusing the same coding techniques.

Responsive: No

Website: http://easyframework.com/

Blueprint

Blueprint is a CSS framework, which aims to cut down on your development time. It gives you a solid foundation to build your project on top of, with an easy-to-use grid, sensible typography, useful plugins, and even a stylesheet for printing.

Responsive: No

Website: http://www.blueprintcss.org/

YAML

“Yet Another Multicolumn Layout” (YAML)

YAML is an (X)HTML/CSS framework for creating modern and flexible floated layouts. The structure is extremely versatile in its programming and absolutely accessible for end users.

Responsive: Yes

Website: http://www.yaml.de/

BlueTrip

A full featured and beautiful CSS framework which originally combined the best of Blueprint, Tripoli (hence the name), Hartija, 960.gs, and Elements, but has now found a life of its own.

Responsive: No

Website: http://bluetrip.org/

YUI 2: Grids CSS

The foundational YUI Grids CSS offers four preset page widths, six preset templates, and the ability to stack and nest subdivided regions of two, three, or four columns. The 4kb file provides over 1000 page layout combinations.

Responsive: No

Website: https://developer.yahoo.com/yui/grids/

Elements

Elements is a down to earth CSS framework.

It was built to help designers write CSS faster and more efficiently. Elements goes beyond being just a framework; it’s its own project workflow. It has everything you need to complete your project, which makes you and your clients happy.

Responsive: No

Website: http://elements.projectdesigns.org/

52framework

With HTML5 support coming so fast, with the tiniest of hacks we are able to use it today in virtually all browsers. Using HTML5 makes for much cleaner markup. This framework fully uses all the great advantages of HTML5.

Responsive: No

Website: http://52framework.com/

elastiCSS

A simple CSS framework for laying out web-based interfaces, based on the print layout technique of 4 columns but with the capability of unlimited column combinations, and the capacity to make elastic, fixed and liquid layouts easily.

Responsive: No

Website: http://elasticss.com/

Boilerplate

noun: standardized pieces of text for use as clauses in contracts or as part of a computer program.

As one of the original authors of Blueprint CSS I’ve decided to re-factor my ideas into a stripped down framework which provides the bare essentials to begin any project. This framework will be lite and strive not to suggest un-semantic naming conventions. You’re the designer and your craft is important.

Responsive: No

Website: http://code.google.com/p/css-boilerplate/

Emastic

Emastic is a CSS Framework; its continuing mission: to explore a strange new world, to seek out new life and new web spaces, to boldly go where no CSS Framework has gone before.

Responsive: No

Website: http://code.google.com/p/emastic/

Malo

Malo is ultra small css library for building web sites.

It is meant to be a structural base for small or medium web sites. Malo derives from its bigger brother, the Emastic CSS Framework.

Responsive: No

Website: http://code.google.com/p/malo/

The Golden Grid

The Golden Grid is a web grid system. It’s a product of the search for the perfect modern grid system, and it’s meant to be a CSS tool for grid-based web sites.

Responsive: No

Website: http://code.google.com/p/the-golden-grid/

1kb grid

Other CSS frameworks try to do everything—grid system, style reset, basic typography, form styles. But complex systems are, well, complex. Looking for a simple, lightweight approach that doesn’t require a PhD? Meet The 1KB CSS Grid.

Responsive: No

Website: http://www.1kbgrid.com/

Fluid 960 Grid System

The Fluid 960 Grid System templates have been built upon the work of Nathan Smith and his 960 Grid System using effects from the MooTools and jQuery JavaScript libraries.

Responsive: No

Website: http://www.designinfluences.com/fluid960gs/

Baseline

Baseline is a framework built around the idea of a “real” baseline grid.

Built with typographic standards in mind, Baseline makes it easy to develop a website with a pleasing grid and good typography. Baseline starts with several files to reset the browser’s default behavior, build a basic typographic layout — including style for HTML forms and new HTML 5 elements — and build a simple grid system.

Responsive: No

Website: http://www.baselinecss.com/

Lovely CSS Framework

The Lovely CSS Framework is a simple and straightforward way to easily deploy an XHTML/CSS site.

Based on a simple 960px wide grid system, featuring multiple column layouts, and various pluggable add-ons.

Responsive: No

Website: http://code.google.com/p/lovely-css/

xCSS

Object-Oriented CSS Framework

xCSS is based on CSS and empowers a straightforward, object-oriented workflow when developing complex style cascades. Using xCSS dramatically cuts your development time by giving you an intuitive overview of the overall CSS structure, variables, reusable style cascades and many other handy features.

Responsive: No

Website: http://xcss.antpaw.org/

FEM CSS Framework

FEM CSS Framework is a 960px-width, 12-column grid system plus common CSS styles, for developing web layouts easily and quickly. It is based on the 960 Grid System, but with a twist in the philosophy to make it more flexible and faster to play with boxes.

Responsive: No

Website: http://www.frontendmatters.com/projects/fem-css-framework/

Helium

Helium is a framework for rapid prototyping and production-ready development. In many ways it’s similar to both Twitter Bootstrap and ZURB Foundation – in fact, it uses bits of their code. Unlike either of these two frameworks, however, Helium is designed to be much more lightweight and easier to tinker with.

Responsive: Yes

Website: https://github.com/cbrauckmuller/helium

Sidereel Groundwork

A responsive HTML5, CSS and JavaScript framework built with Sass and Compass, with a heavy focus on responsiveness and on making a single layout work on different devices.

Responsive: Yes

Website: http://groundwork.sidereel.com/

Gumby

Gumby is a responsive 960 grid CSS framework. The grid lets you lay out pages quickly and easily in a natural, logical way. The framework is packaged with tons of styles and common interface elements to help you quickly put together functional prototypes.

Responsive: Yes

Website: http://gumbyframework.com

 

Credit: https://github.com/usablica

Always space for new findings in science

We rely on prior scientific discoveries by researchers who are long gone; or we recognize their findings as true only after their scientific breakthrough becomes common knowledge.

Accepting a (fresh) research finding is difficult because it breaks one (or more) old perceptions. Finding something new is difficult for the same reason. It is all about our conservative intellectual foundations and the will to go beyond them.

This is what I believed to be the main reason why it is impossible for simple minds to make breakthrough findings in science. But it is not just perception; it is something even simpler. It is our habit of taking for granted everything we see, without asking questions.

Faraday put this nicely in one of his letters:
“I was this morning called by a trifling circumstance to notice the peculiar motions of camphor on water; I should not have mentioned the simple circumstance but that I thought the effect was owing to electricity, and I supposed that if you were acquainted with the phenomenon, you would notice it. I conceive, too, that a science may be illustrated by those minute actions and effects, almost as much as by the evident and obvious phenomena. Facts are plentiful enough, but we know not how to class them; many are overlooked because they seem uninteresting: but remember that what led Newton to pursue and discover the law of gravity, and ultimately the laws by which worlds revolve, was–the fall of an apple.”

Dr. Bence Jones, Faraday’s Life and Letters, Vol. I, Pg 25

 

Free and Open Web, please

Remember SOPA? Well, it was never a lost fight for the governments (it is not just the USA; Spain, France and Italy have already passed similar laws; see “A Comparative Look at SOPA and Similar Laws around the Globe”). The fight is still on, and the governments keep the same habit, merely changing the name of the initiatives through which they want to increase control over the internet!

As I write this, today on December 3rd, 2012, a meeting is taking place in a very business-minded city (Dubai) to create a control mechanism for the internet. In other words, the governments want more control over every web page that is put on the net and over every other internet activity. In political terms, this is just “regulating” the net!

We are talking about the same net that has been self-organizing perfectly well over the last 25 years. We all realize that there is a lot to be improved in the security of the internet, but I do not believe this can be fixed by bureaucrats! The movement is clearly an attempt to take control of our society’s greatest success of this century and transform it into a government failure, with easier censorship and frustrated freedom as the outcome. The internet has to be free; it has to be the place where differing opinions are still welcome. It has to be the place of the non-mainstream media, where every citizen can shout their frustrations!

I am not sure if we (the internet users) can keep this from happening, but there is a website, http://www.freeandopenweb.com, where more information can be found and where an open petition is taking place. Probably this is a lost fight; most probably governments will “regulate” the internet, but it is wrong. In the worst case, it will mean some more money spent on useless bureaucrats and good-for-nothing spies.

Symbian – a post mortem – By Mika Raento

This is a post by Mika Raento, mikie@iki.fi, 2012-10-13, first published in a public Google Doc.
(You could actually call this an S60 post-mortem, since I mostly write about what Nokia did with Symbian; UIQ never amounted to much outside SE’s dreams.)
This is an extended good-bye piece for Mika Raento’s blog on Symbian. Since it is a post-mortem, it focuses on what went wrong rather than on what went right (first smartphones, extremely good power management, first cameras, some nice hardware). It has also been a long time in the writing (since 2010), so it is not very topical today (late 2012), but I think it is still an interesting historical perspective.

History

Caveat: I wasn’t there. I might be well off on some of the points, but I think the approximate train of thought at Nokia that I portray is correct.
Nokia chose Symbian in the late 1990s – an eternity ago. They were targeting something like the 9210 Communicator and the 7650 – devices with 8 (or 4) megabytes of RAM and 4 megabytes of disk (the 9210 did come with an additional 16 megs of disk in the form of an MMC card). At that point in time there weren’t many options. Linux wasn’t really viable for such specifications. Windows CE was around – but let’s face it, it’s still not a great choice, let alone then. The previous Nokia smartphone (not that the term was widely used yet) was the 9000/9110 Communicator, which ran GEOS (of all things) and was probably seen by Nokia as a dead end by then.
What people at Nokia saw in the late 1990s was the Psion 5: an instantly-on, multitasking, lightning-fast clamshell PDA running its own 32-bit, Unicode-ready operating system. It had a suite of productivity apps and a devout following. It probably looked very, very good at that point. So Nokia went with the Psion EPOC OS, later renamed Symbian. The Psion 5 ran EPOC Release 5 (R5); the Communicator would run 6.0 and the 7650 6.1.

The technology

Poor third-party application performance

Now let’s go back to the Psion 5. Its performance derived from three major design points: execute-in-place ROM, battery-backed RAM as storage and monochrome/grayscale screen. What do these mean?
Execute-in-place ROM means that there is no ‘loading’ of programs. No reading from disk. No paging needed. Size of the executable doesn’t affect performance (though it would affect the amount of ROM needed). No actual dynamic linking as although there were DLLs (for sharing code between processes) everything the linker normally does would have been done at ROM build time. No relocations. Starting an application meant creating the kernel structures for a process and jumping into the right memory location. The dynamic linker’s performance didn’t really matter. Even virtual memory was only sort-of as a number of processes were loaded into fixed physical memory locations so that switching between them didn’t incur a TLB flush, so switching between the built-in processes was pretty fast (and there’s a lot of switching since Symbian is a micro-kernel operating system, meaning most operating system calls would incur a process switch).
Battery-backed RAM for storage meant that storage was blindingly fast (well, for those times, the memory bandwidth and latency would not be anything to write home about today). No disk seeks, no fetching across buses. For example the Epoc (Symbian) native database access would have been instantaneous for all intents and purposes. Probably the file server was designed around RAM speeds. Storage access patterns, amount of data and number of writes didn’t really matter.
(A smallish) Grayscale screen meant that drawing things on-screen didn’t involve pushing around too many bits. In addition, graphics assets were in blitting-friendly format in ROM so ‘loading’ a bitmap just meant giving the right memory address to the font-and-bitmap-server. The number of bitmaps you used didn’t cost anything (performance-wise – of course it would again cost ROM size).
Now consider instead a third-party application on a 7650. It’s stored in the internal flash. Starting the application requires loading all of the executable code from flash into RAM (demand paging didn’t come until late software versions of the N95). After that it needs dynamic-linking fixups. Bitmaps were now color, and needed loading from disk into memory first. And writing to disk through the Symbian database was slow as molasses.
Things were exacerbated by the fact that all the example code and documentation was written without taking into account the completely changed performance characteristics of the platform. You were encouraged to load all your bitmaps on startup. The platform code for loading a bitmap would open the file, read the bitmap directory and seek to the bitmap in question for each bitmap you loaded, multiplying disk accesses (we had to write our own bitmap loader for Jaiku to get acceptable performance).
And things got worse over time. Nokia gave up on execute-in-place ROM, probably for cost reasons. Resolution and color depth increased, meaning slower loading (and blitting) of bitmaps. Bitmaps were replaced with SVG (scalable vector graphics), which meant that showing a bitmap wasn’t just blitting from a memory location, it meant loading from disk, parsing XML, building an internal representation of the graphic and rendering that into a hi-color bitmap. Lazy-loading by the platform would have been pretty neat.
On early MMC-supporting devices it was pretty easy to lose data. We corrupted a third of 100 cards in a couple of weeks writing log files to the card while running the Reality Mining experiment. Obviously FAT on a removable flash card in a battery-powered device was a recipe for disaster. To ‘fix’ this, Nokia removed most write caching in 3rd edition. Writing to the MMC was now 50 times slower (for small writes) than writing to the internal memory.
Obviously you could write performant apps for Symbian – there are very good games out there, and Google Maps for Mobile is (was? it’s no longer distributed) pretty cool, performance-wise (although I worked on it, the basic architecture was in place before I joined the team and has nothing to do with me). But these got their performance by basically discarding all of the platform components and replacing them with their own (graphics loading, UI components, data storage) – not the immediately obvious approach when coming to a new platform. And the performance of Nokia’s own apps still sucks on my E72.

Shit Java startup performance

S60 phones weren’t actually a horrible J2ME platform (if there are such things as non-horrible J2ME platforms). If you don’t think so, try Opera Mini – it’s nothing short of amazing. But starting a J2ME app takes ages – long enough that it was a deal-breaker for many people.

Non-standard OS, compilers and libraries

Don’t get me wrong. People are willing to learn new things – witness having to use Objective-C on the iPhone. But not being able to use almost any of your existing libraries or skills is different.
Symbian did not come with a reasonable C library. There was one that had been used to port the JVM but it only contained what the JVM needed. This was alleviated later, much later – more on that further down.
You were not meant to have globals in Symbian programs. Before Symbian OS 9 it was almost impossible: applications were not EXEs but DLLs loaded into a platform process, and the toolchain tried to make it impossible to have global data in DLLs – in the name of not having to have a page of writable memory for each process loading the DLL (remember that you could run a process with 4k of stack and 8k of heap – an extra 4k for a DLL was quite a lot).
Symbian APIs were asynchronous as a rule, and you were not meant to use extra threads (again, to save the 4k of stack you’d need for a thread). Some async APIs you could explicitly wait on to mimic synchronous calls, but some you couldn’t. Since the documentation assumed you weren’t using threads, it didn’t tell you which APIs were thread-safe. Asynchronous APIs meant that all of your nice linear logic needed to be turned into state machines instead (some people like this – witness node.js and Twisted – but you can write much simpler async code in languages with dynamic typing and lexical closures).
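As an illustration of that transformation (a sketch in JavaScript with made-up names, not Symbian code), here is a linear "fetch, then save" flow rewritten as the kind of explicit state machine an asynchronous-only API forces on you:

```javascript
// Hypothetical async primitives standing in for Symbian-style completion
// callbacks: each "API call" finishes later via a callback.
function fetchAsync(url, done) { setTimeout(() => done(null, `data from ${url}`), 0); }
function saveAsync(data, done) { setTimeout(() => done(null, `saved: ${data}`), 0); }

// Linear logic "fetch, then save, then report" becomes one object, one
// completion handler, and an explicit state variable.
function runDownload(url, onFinished) {
  const machine = { state: "FETCHING" };
  function step(err, result) {
    if (err) { machine.state = "FAILED"; return onFinished(err); }
    switch (machine.state) {
      case "FETCHING":            // fetch completed; move on to saving
        machine.state = "SAVING";
        return saveAsync(result, step);
      case "SAVING":              // save completed; we are done
        machine.state = "DONE";
        return onFinished(null, result);
    }
  }
  fetchAsync(url, step);          // kick off the first transition
}

runDownload("http://example.org", (err, result) => {
  console.log(result);            // "saved: data from http://example.org"
});
```

Every extra asynchronous step adds another state and another branch to the completion handler, which is exactly why non-trivial Symbian code tended to sprawl into hand-written state machines.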
For a long time the standard Symbian device compiler was GCC 2.9 (from 1998!). You could use ARM’s compiler instead, but that cost several thousand dollars per seat. Emulator builds were done using Metrowerks CodeWarrior, which was also stuck in the 90s when it comes to C++; Visual Studio compilers were a possibility, but support for them was dropped later.
Until Symbian OS 9, C++ exceptions were not supported. This had been decided by Psion in the mid-90s, as they thought (probably correctly at the time) that exception support was too flaky and too expensive (in code size and run time). So you couldn’t use C++ code that relied on exceptions or stack unwinding for correctness (like the standard library).
So: no decent C library, no globals, no threading, no exceptions, and an ancient compiler. The chances of getting a useful third-party library, or your own non-Symbian C or C++ code, to work out of the box were pretty slim, and often it was unfeasible to make the necessary changes.

C++

There is something fundamentally broken about building reusable components out of C++. Most attempts are abject failures – some spectacularly so (such as Taligent). There are some exceptions (Boost, maybe Qt, maybe MFC), but even they tend to be expert-level only, and they were definitely backed by more resources and experience than were applied to Symbian.

Testability

It was pretty hard to make Symbian programs, especially user-facing programs, testable.
There was no real unit-testing framework shipped with the platform (which is not that unusual, but it was much harder to get existing frameworks to work well with Symbian). The platform APIs were a shitload of concrete C++ classes – you couldn’t mock them without wrapping them. The Symbian UI was framework-oriented, rather than library-oriented: your code was often called by platform code rather than you calling platform code, so to test your code you needed to mimic the platform call sequences which were brittle and asynchronous (call this method, then let events run for an unspecified time, then call this method). Application startup was done in closed-source platform code so getting UI code to run inside a test framework was black magic. Writing testable code meant doing a lot of up-front work.
Build times were pretty bad. Using Symbian’s own toolchain it was pretty much impossible to get builds under half a minute. The build ran a BAT file that ran a perl script that generated recursive makefiles and then ran make. The emulator took longer and longer to start up as the platform grew (the 5th edition emulator took over a minute). It was non-trivial to load new code into a running emulator (possible in theory, but, for example, Nokia’s own IDE kept DLLs loaded into the debugger so they couldn’t be replaced). IDEs helped, but Nokia eventually phased out all the good IDEs (see ‘IDEs’ below). That sub-second compile-and-run unit-test cycle you so like? Forget about it. I eventually got the cycle just under 10 seconds using scons-for-symbian plus my own semi-headless test runner, on a 4-core, 12 GB RAM machine – a machine you wouldn’t have been able to get in the mid-2000s when Symbian was still dominant.
Lots of interesting APIs only worked on the device, worked differently on the device, had very different performance on-device, or worked differently on different devices. These included: internet connectivity; bluetooth (you needed a specific, no-longer-manufactured USB dongle to get it to work at all); low-level memory management (heap and stack sizes, memory protection); dynamic linking (the emulator used the Windows native linker in the end); the camera (some emulators emulated the camera, some didn’t, and the emulated camera app was different from the one on the phone); platform security (the emulator didn’t care about many things the device would); application installation (there was even a web site called “Why the fuck won’t my SIS file install?” that debugged your SIS file for you); the operating system software release (every device forked the platform produced by the platform team at Nokia, and the SDK was yet another fork); removable media; built-in media; and voice and data calls (which never worked in public SDKs). You needed to test your application extensively on multiple devices (at any point in time there would be up to 30 devices on the market). Automatic testing on-device was difficult (for example, fully-automatic installation was frowned upon because of platform security) – we didn’t consider it worth the effort at Google (though Symbian was never highest-priority at Google due to the minuscule amounts of traffic it generated).

IDEs

In the olden days you could write your Symbian code in Visual Studio. Now say what you will of Microsoft, Visual Studio is pretty damn good when it comes to performance and stability. Visual Studio 6 in particular was very, very fast on the machines we had in the early-to-mid 2000s. You pressed F5 and the emulator started up pretty much instantaneously with your newly-compiled code. You could also use CodeWarrior, but that cost more and was pretty bad compared to VS. Since the Microsoft and Metrowerks compilers’ LIB files and C runtimes weren’t compatible, SDKs came in two flavours: VS and CodeWarrior.
IDE support back then was implemented by a toolchain target that created an IDE project from your Symbian-specific MMP project file. This obviously meant that if you needed to change the project (add a library or a source file) you took a bigger hit as you needed to run the Symbian toolchain and reload the project. The toolchain also didn’t grok dependencies between your components, so those had to be added manually (for VS you could use ready-made dependency-management plugins and for CodeWarrior we rolled our own). If you had client-server code then you needed to hack something together to get the server to build as needed.
At some point Nokia decided that they really couldn’t live with a dependency on Visual Studio (I heard that they didn’t want such a strategic dependency on Microsoft. Hah.). So they dropped Visual Studio support and bought rights to CodeWarrior so they could use it in perpetuity (and sell it – later, give it away). This made things quite a bit worse. CodeWarrior’s performance was not as good as Visual Studio’s, its debugger was quite a bit worse, and in general the IDE left a lot to be desired (e.g., it didn’t treat files as paths – instead it had a set of ‘search directories’ and a set of ‘filenames’ and looked for the files in the search directories – guess what happened if you had two files with the same name in different parts of your source tree?). And the CodeWarrior project file creation was broken in several ways.
A side-note: now that Nokia owned the Symbian version of the MetroWerks compilers, they were also the only ones maintaining them. Which they really couldn’t do – it seems they had neither the will nor the skills. There were almost no new versions of the compiler, and the changes were absolutely minimal. More on software skills later.
Even Nokia realized that CodeWarrior was not really the future, so they decided to hack on Eclipse to make it work with Symbian, calling their fork Carbide. Open source, you know – the new hotness, and strategically safe. Again, however, they lacked the skills to do that well. The first version tried to do the right thing and have the IDE do the builds, but I assume they were never able to get that to work reliably, so later versions just ran the command-line toolchain. So build times, rather than being improved by the IDE, got worse. We at Jaiku actually sat down and talked to the heads of the Carbide project at one point after getting very frustrated with it (we ended up using CodeWarrior – that’s how bad Carbide was). Neither of the two heads we met was an actual coder. How do you run an IDE development project if you don’t know what an IDE is supposed to do?

Graphics

Although graphics could be seen as part of the Technology, its impact was big enough that it warrants a chapter of its own.
When the 7650 came out, its graphics were a reasonable match for market expectations. It used colors sparingly, had a very small number of UI concepts (lists, grids, soft keys, menus, selection boxes, editors and a couple of others) and was a step up from the dumbphones/featurephones of its day. But sadly, it also pretty much reflected the maximum capabilities of the UI technology that Nokia could create. Later we would get themes (background images and colors that the user could choose), and much, much later fancy transitions.
Nokia didn’t have the experience or expertise to improve on the UI APIs. Whereas the iPhone builds on the world’s longest continuous tradition of building the best UI APIs (Macintosh, NeXT, OS X, iOS), Nokia had a bunch of UK engineers for the low-level stuff and a bunch of Finnish engineers for the S60 UI layer. They couldn’t write UI libraries and APIs that would allow internal or external developers to step up their game. The worst example of this is in the code for each UI element in the 5th edition source: a button’s click handler, say, has three times as much code doing the animation as it has handling the actual click. Apple’s API separates animation from other UI code, and simpler animations in particular (moves, fades) are declarative.
In 2007 Nokia brought out the N95, which had 3d acceleration onboard. But they didn’t add 3d acceleration to all their phones, just a few. This meant that, unlike Apple, they couldn’t build their UI capabilities and performance on hardware acceleration.

Browsing

The first S60 phones shipped only with a WAP browser (remember WAP? operators thinking you’d pay per click for their walled-garden web?). It was definitely not possible to run a full 2002-web-capable browser in 4MB of RAM, though there were some limited third-party browsers (like Doris).
In (roughly) 2006 Nokia started shipping a WebKit-based browser on the S60 3rd edition phones. They had managed to build an ambitious browser team including (AFAIK) some original KHTML/WebKit engineers. This was a great engineering effort and resulted in a modernish browser (though with many rough edges – you could call it a beta).
Then they lost the team (exhibit 1: http://www.linkedin.com/in/dacarson, exhibit 2: http://www.linkedin.com/pub/antti-koivisto/11/904/5a6) and shipped the same beta browser for two years (see symbiatch’s short article and the history of browser versions).
In 2007 the iPhone jumped way ahead with a browser that could be built on better graphics libraries and hardware, much more memory, a much larger screen and a good touch screen. S60 never caught up. The iPhone 1, although shipped in much smaller numbers, generated much, much more traffic than all the S60 models combined.

Process and politics

(This part is somewhat more speculative, as I never worked inside Nokia or Apple, but it is based on pretty educated guesses.)

Nokia and Symbian

Until Nokia finally bought up Symbian, OS development would go something like this: Nokia needs feature X (say Bluetooth or Wi-Fi). Symbian says OK: it’s going to be in release ‘next + 2’, where ‘next’ is the release Nokia wants to ship X with. So Nokia builds X themselves and goes with their own version for a few releases, then makes the painful transition to the Symbian one. So instead of just paying Symbian for the feature, Nokia ends up building it themselves, paying Symbian, and doing a transition. Also: Nokia’s OS skills were not as good as Symbian’s, so their version was pretty bad. Also: the transition sometimes broke third-party apps.
Symbian also made their own UI, called Techview – which was meant to be just a placeholder that licensees would replace. Ericsson had UIQ, which Symbian was more closely associated with. Nokia built S60 themselves (plus S80 for communicators and S90 for their failed TV-enabled device). All of these had a similar set of underlying APIs, but also significant differences. Nokia needed to build a separate set of applications for each of its own platforms, and third-party developers needed to support each UI platform separately, fragmenting their efforts.

Product management and engineering at Nokia

AFAICT Nokia’s engineering was very much top-down. Design prototypes were turned into phones, applications and features. These were broken down to work to be done by teams, and the teams had very little say in whether the stuff they ended up doing made sense.
An example: S60 had a built-in e-mail application that had a feature for checking for new e-mails automatically. It would create an internet connection via GPRS/3G when first requested, and then poll for emails periodically. All implemented nicely by some engineers. However, if the connection was ever dropped, the polling just stopped. Which meant that in reality there was no automatic checking for new e-mails, since you couldn’t rely on it. I find it highly unlikely that the engineers who built this wouldn’t know this, but clearly they had no way of influencing the end result. The same problem remained for several iterations of the platform.
This is in stark contrast to the visible effects of the engineering culture at Apple. There clearly the emphasis is on getting the end result right, whatever teams and layers need to be involved. The way the browser, Wi-Fi and internet connectivity for example know of each other to tell the user why they can’t connect is miles better than S60 (or most Microsoft products).
Nokia seemed to care very little about issues with existing phones – only the next phone mattered. They did release new software versions for phones to deal with the worst bugs that operators complained about, and from the N95 on there would be significant improvements to existing phones (culminating with Anna phones being upgradable to Belle).

Openness and developer relationships

A ‘normal’ third-party developer got an SDK from Nokia. This came with so-called ‘public’ APIs. Both Symbian and Nokia also had ‘Partner’ APIs, which were (of course) physically present on the phones but had no headers in the SDKs (and later no LIBs either). These non-public APIs were needed for lots of interesting things, like getting the current cell information for locating the phone. Things would later improve somewhat, but the distinction remained for a long time. It was possible to become a partner. With Symbian it was mostly a question of money, but with Nokia you needed to prove your worth to their partnering organization. If you talked about the non-public APIs in their forums, they would remove the posts. (To be fair, Apple does the same thing, and required an NDA just to get the SDK.)
Nokia didn’t have a formal bug reporting program (to start with) and no public listings of known issues (this, BTW, is pretty much the same with Apple, but in stark contrast to Microsoft). Later they did announce a program, but at least I never got a resolution to any of the bugs I reported. Bugs reported against Carbide or the Metrowerks compiler would sit untouched, then be closed when the next version came out even though the issue was not fixed. (I cannot link to the bugs as the bugzilla server no longer exists.)
Symbian used to ship source for some parts of Epoc; Nokia stopped even that. The source was often very useful (the same way opendarwin source is sometimes useful :-).
The S60 platform was originally tested only with Nokia apps, and shipped when it only just worked with those. Messages would not appear in the Inbox if accessed by another app at the wrong time, the bluetooth and network stacks were flaky with repeated use, and using too much memory would freeze or reboot the phone. This probably got at least somewhat better over time, as they did start testing with third-party applications too.

Conclusions

The S60 platform clearly tells the story of a great hardware company struggling to become (even a good) software company. I think it also tells a story of how hard it is to build expertise in software – without a critical mass of people, companies, products and projects in an area (in this case, specifically UI libraries and compilers) you just can’t make it. The Bay Area has that mass – Finland doesn’t.
There are shades of the Innovator’s Dilemma here: when you have almost 100% market share, how do you know you need to completely recreate your technology? The iPhone started from 128MB of memory, 16GB of storage and accelerated graphics – a system built for 4MB/16MB and bitmap graphics could not be scaled to the same user experience (or developer experience). (Jeff Dean has often said that you can design a system for 10x growth, but 100x requires a different system.)
Nokia most likely had a cultural/organizational/managerial problem dealing with both of the problems above. They did try, though: Meego (and they had a couple of Linux-based prototypes even before that), OpenC, Carbide and Qt were all attempts either to make significant improvements to the current offering or to create the next technology. It ended up being too little, too late.

Science today – Simple Scenario of Research Paper Use – The Unclaimed Claims

A: That moment, when you read a paper and you know one of the cited authors :)
A: *quote from the paper which is citing here, with a claim*
B: aha! :)
B: cool, which paper?
A: wait, let me give you a link to the PDF
A: *link to paper here*
A: *Title of Paper Here*
B: well, but i was not aware of the fact that in that paper i show *claim here*
A: that is the fun in this world
A: you say something
A: ppl transmit it differently
A: and by the end of the day we are making Science already :)
B: i hope people won’t continue to cite that paper in this way …
A: I am reading your paper now. I did not read it before
A: but the important thing is that your paper inspired a judgement
A: you should be proud, that is the aim of scientific publications
B: you won’t find evidence to that claim. but good method: read cited papers
B: yes, citation count is all we need
A: :) and inspiration of judgement


I am not putting any comment here. After all, research papers are being read.

© 2014 Norm Al
