An Interview with SiSoft Sandra Developer Adrian Silasi


Our interview

Sandra has been around since the early days of hardware reviews and benchmarking. How (and why) did you get started?

I started way back when my father bought me a Commodore 64; I wrote some programs for it and just progressed from there (music programs as I was studying music at the time – no benchmarks at all).

I went on to study Electronics Engineering (not music, to my parents’ disappointment), where we also learned some programming (Eiffel, IBM AS360, etc.) but nothing mainstream (C/C++, Pascal). I had just got my first real computer (a 386, bought second-hand from the University) with Windows 3.1, and I bought a set of Turbo Pascal for Windows from a teacher.
I thought: “What’s the best way to learn programming/APIs?” Write a “do all be all” utility, aka a “system information” app, which became “SAW (System Analyst for Windows)”, written in Turbo Pascal (!). I released it as freeware to see what other people thought. About a year later, a company wanted to buy it, and since I was paying fees (I did not have a grant) I sold it to them.

When Windows 95 came out I decided to build a better version from scratch, to learn C/C++ and stick to the official SDK & tools. This became “SANDRA” and it just grew from there. It was also released as freeware; when University was over and I could not find a good job, someone suggested I make a shareware version, like the great games of the day (e.g. DOOM).


What, in your opinion, sets Sandra apart from the many other hardware benchmark/information suites out there (including AIDA, Passmark, Crystalmark, etc.)?

I think all benchmarks – if fair and valid – are useful for something; I don’t think there is one “end all be all” benchmark – nor should there be! It’s just a question of whether they measure what you are looking to measure or not.

You left out the 800-pound gorilla in the room, FutureMark: it would be a mistake to let one company hold a monopoly, as in any market. I think everybody mentioned should aim higher and provide real competition.


Are there any Sandra features / capabilities you feel have been largely overlooked?



The program has evolved considerably

Quite a few of them: I am always amazed at how few of the features get used.

There are benchmarks that measure indexes I have not seen in other benchmarks (multi-core transfer efficiency; power management efficiency; GPGPU / APU performance, using the same workloads as the CPU ones and supporting CUDA, OpenCL and Compute Shader; and .NET and Java, to name a few).
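To give a feel for what a “multi-core transfer efficiency” style test might look at, here is a minimal C++ sketch (my own illustration, not Sandra’s actual code; the iteration count and memory-ordering choices are assumptions) that ping-pongs an atomic flag between two threads and reports the average core-to-core round-trip latency:

    // Hedged sketch: estimates core-to-core round-trip latency by
    // ping-ponging an atomic flag between two spinning threads.
    // Build with: g++ -O2 -std=c++17 pingpong.cpp -pthread
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        constexpr int kIters = 100000;      // arbitrary illustration value
        std::atomic<int> flag{0};           // 0: main's turn, 1: pong's turn

        std::thread pong([&] {
            for (int i = 0; i < kIters; ++i) {
                while (flag.load(std::memory_order_acquire) != 1) { /* spin */ }
                flag.store(0, std::memory_order_release);   // bounce it back
            }
        });

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kIters; ++i) {
            flag.store(1, std::memory_order_release);        // send
            while (flag.load(std::memory_order_acquire) != 0) { /* spin */ }
        }
        auto stop = std::chrono::steady_clock::now();
        pong.join();

        double ns = std::chrono::duration<double, std::nano>(stop - start).count();
        std::printf("average round-trip: %.1f ns\n", ns / kIters);
    }

Pinning each thread to a specific core (for example with SetThreadAffinityMask on Windows) would make per-core-pair numbers meaningful; without pinning, the OS scheduler decides where the threads run.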

It has integrated (free) two-way Ranking functionality (it not only posts results but also downloads results and performs statistical analysis for score certification), as well as integrated Pricing functionality for more details (pricing, pictures, specifications).


How would you respond to critics who claim Sandra is just a series of synthetic tests (and thereby inferior to "real world" benchmarks)?


I think both synthetic and “real world” benchmarks have their uses and neither should be ignored. Synthetics are very useful to drill down and find out the reason for performance issues/gains, as they are designed to measure specific performance indexes.
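To make concrete what “measuring a specific performance index” means, here is a minimal sketch of a synthetic streaming-read test that isolates memory bandwidth alone (the buffer size, pass count and use of std::accumulate are illustrative assumptions, not how Sandra implements its memory benchmark):

    // Hedged sketch of a synthetic memory-bandwidth test: sequentially
    // read a large buffer several times and report effective GB/s.
    // Build with: g++ -O2 -std=c++17 membw.cpp
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        const std::size_t kBytes = 256ull * 1024 * 1024;   // 256 MiB working set
        std::vector<std::uint64_t> buf(kBytes / sizeof(std::uint64_t), 1);

        volatile std::uint64_t sink = 0;    // keeps the compiler from deleting the loop
        const int kPasses = 8;
        auto start = std::chrono::steady_clock::now();
        for (int p = 0; p < kPasses; ++p)
            sink = sink + std::accumulate(buf.begin(), buf.end(), std::uint64_t{0});
        auto stop = std::chrono::steady_clock::now();

        double secs = std::chrono::duration<double>(stop - start).count();
        double gbps = (double(kBytes) * kPasses) / secs / 1e9;
        std::printf("streaming read: %.2f GB/s (sink=%llu)\n", gbps,
                    static_cast<unsigned long long>(sink));
    }

Because the loop does almost nothing except touch memory sequentially, a low score points directly at the memory subsystem; that is the “drill-down” value of a synthetic test that a whole-application benchmark cannot give you.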


What are your plans for the future of the program? What test suites (if any) do you plan to add?



Sandra's 'Favorites' section is an easy way to access the modules you use most often

That depends entirely on users and what they are interested in; I always try to listen to everybody and if something makes sense I do it. Many of the features in Sandra have originated this way. I’m just as eager to find out what the future holds.
