By Rafael Bloom, Partner, Strategy, Innovation & Governance @ Digital Works Group
Introducing Rafael Bloom
Rafael Bloom is a Partner at Digital Works Group, specializing in strategy, innovation, and governance. With deep expertise in finance and emerging technologies, he brings sharp insight into how digital infrastructure and data systems shape industries like high-frequency trading, finance and AI.
The Ubiquitous Connectivity Question
Let’s start with a basic and hopefully uncontroversial assumption: today, we are accustomed to connected technology being part of almost everything we do. Now let’s make a further assumption: that we have also become accustomed to variability in the quality, speed, and reliability of our data connections.
Most of the time it really doesn’t matter – on video conference calls we say things like: ‘Sorry, you dropped out for a second, can you say that again?’.
Sometimes we have to load up a web page for a second time, or the movie we are watching stutters for a couple of seconds. Mildly irritating, sure – but tolerable.
But what if those virtually unnoticeable variances and delays in data connectivity were the key to earning – or missing out on – millions of dollars?
High Frequency Trading gets its edge by eliminating as much latency and drop-out as possible.
The Price of Failure
If a painstakingly designed High Frequency Trading (HFT) strategy failed to execute properly because of a momentary glitch, it would squander the millions of dollars spent on everything that led up to it.
Everything that feeds into it – the research, the top-quality quants and analysts, the salespeople, the software, the IT equipment, the network infrastructure, and most importantly the strategy itself – would count for nothing if a lapse in execution meant failing to make money for clients.
Small connectivity dropouts don’t matter in most use cases – with video calls it pretty much goes with the territory – but when they do matter, it’s a serious business to get right every time.
This is why data centers and connectivity cost so much money in the first place. But in an environment such as HFT, getting it right means investing in very specific, specialized pieces of equipment, and co-locating them as physically close as possible to the Exchange’s trade matching system, within the Exchange’s own data center.
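To put the physics into perspective, here is a back-of-the-envelope sketch in Python. The refractive index and the distances are illustrative assumptions on my part, not vendor figures: light in silica fiber travels at roughly two-thirds of its vacuum speed, so every kilometer of cable in the path adds around five microseconds each way, before any switching or processing delay.

```python
# Back-of-the-envelope: one-way propagation delay over a fiber run.
# The refractive index and distances are illustrative assumptions;
# real-world latency also includes switching and processing delays.

SPEED_OF_LIGHT_VACUUM_KM_S = 299_792  # km/s, approximate
FIBER_REFRACTIVE_INDEX = 1.47         # typical silica fiber

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay in microseconds over `distance_km` of fiber."""
    speed_in_fiber_km_s = SPEED_OF_LIGHT_VACUUM_KM_S / FIBER_REFRACTIVE_INDEX
    return distance_km / speed_in_fiber_km_s * 1_000_000

for km in (0.05, 1.0, 10.0, 50.0):  # in-rack cross-connect vs. across town
    print(f"{km:>6} km -> {one_way_delay_us(km):8.2f} microseconds one way")
```

At these scales, the difference between a rack beside the matching engine and a site across town is measured in whole microseconds – an eternity in HFT terms.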
In Trading, Bandwidth & Latency Run the Show
HFT is distinct from regular electronic trading in a few key ways. Regular trades are also carried out electronically: an order is received (e.g. buy or sell 100,000 shares of an exchange-listed stock) and fulfilled by traders via some form of exchange connectivity and a trade matching engine.
Equally, High Frequency Trading takes place on these same exchanges, and the same financial instruments are bought and sold.
But with HFT, the tech itself enables fascinating and profitable possibilities. HFT strategies might exploit tiny discrepancies in market valuations (aka ‘arbitrage’) to take advantage of momentary opportunities, or act according to an algorithmic trading strategy in order to minimize market impact and achieve a better overall price.
In either scenario, as well as in other HFT strategies, thousands of trades are sent into the exchange and executed before other, slower, market players are able to make their moves.
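To make the arbitrage idea concrete, here is a deliberately toy sketch in Python. Every price, fee, and function name is invented for illustration; a production HFT system adds market data handling, risk checks, and order management far beyond this.

```python
# Toy sketch of the arbitrage idea: the same instrument is momentarily
# priced differently on two venues. All numbers and names are invented
# for illustration; this is not a real trading strategy.

def arbitrage_edge(bid_a: float, ask_b: float, fees_per_share: float) -> float:
    """Per-share profit from buying on venue B and selling on venue A,
    or 0.0 if the spread does not cover costs."""
    edge = bid_a - ask_b - fees_per_share
    return edge if edge > 0 else 0.0

bid_on_a = 100.02  # best bid on venue A
ask_on_b = 100.00  # best ask on venue B
print(f"edge per share: {arbitrage_edge(bid_on_a, ask_on_b, fees_per_share=0.01):.2f}")
# The opportunity exists only until slower participants react,
# which is why every microsecond of latency matters.
```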
HFT Network Locations
Colocation is – of course – important, but once in the Exchange’s data center, it is vital to install the right kind of equipment to get the job done as planned every single time. The demands are high: performance has to be best-in-class where data transmission speed, bandwidth, and reliability are concerned.
Even a single short-run cable can be a bottleneck if it cannot cope with the deluge of data. This is why fiber optic connectivity is the clear choice for the key elements of a High Frequency Trading setup.
But there are also physical factors to cope with in a data center environment, such as temperature, where fiber is a much better choice than copper wire. This is where the technology becomes even more fascinating in its capabilities and possibilities, even outside the world of HFT, where others also seek speed supremacy.
Making Waves
Optical connections running through fiber are capable of immense data throughput, but extracting the most performance possible takes even more: optical transceivers take advantage of the laws of physics themselves.
A given color (i.e. frequency) of light performs differently from another over a given optical cable connection. The piece of kit that exploits this, along with other bandwidth-enhancing techniques like multiplexing, is the optical transceiver.
Multiplexing uses different light colors/frequencies to carry multiple signals over a single fiber, a technique known as Wavelength Division Multiplexing (WDM). Compared with copper wires, specialized optical transceivers and fiber connections deliver connections an order of magnitude faster and more reliable, while also staying cooler.
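For a rough sense of the scale involved, consider the sketch below. The channel count and per-channel rate are illustrative assumptions on my part; real deployments use standardized DWDM or CWDM wavelength grids and vary by product.

```python
# Rough illustration of Wavelength Division Multiplexing (WDM) capacity.
# The channel count and per-channel rate are illustrative assumptions;
# real systems follow standardized DWDM/CWDM grids and vary by product.

channels = 40               # distinct wavelengths ("colors") on one fiber
rate_per_channel_gbps = 10  # line rate carried on each wavelength

aggregate_gbps = channels * rate_per_channel_gbps
print(f"{channels} wavelengths x {rate_per_channel_gbps} Gb/s each "
      f"= {aggregate_gbps} Gb/s over a single fiber")
# -> 400 Gb/s on one strand of glass: capacity that would otherwise
#    require many parallel copper or single-wavelength links.
```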
Of course, traditional data center networking OEMs provide this kind of equipment, but where the HFT firms are finding their edge is by selecting truly specialized components, from optical transceivers to Direct Attach Cables (DACs).
ProLabs is one such provider, with a range of products geared towards the specialized needs of HFT.
Choosing OEM vs ProLabs
The big ‘OEM’ names are where they are for good reason, and major elements of an organization’s tech footprint will always rely on them.
Nevertheless, HFT is one of the most cutting-edge and profitable data connectivity use cases, where every component has to perform to its full capability with complete reliability.
Yet, whereas batch-level testing suffices for the OEMs, ProLabs thoroughly tests every single component, giving valuable peace of mind for such a mission-critical role in HFT data infrastructure.
Coming Soon to Your Industry?
Just as these technical capabilities have created new possibilities for financial services, they will do so in many emerging technologies and industry verticals.
Reliability and speed of decision making are just as important in autonomous vehicles, in telemedicine, or in the dominant topic of the day, Artificial Intelligence.
The way data moves around in AI computation, as opposed to the traditional computational paradigm, places greater demands on data centers than ever before.
Just as GPUs proved better suited to AI tasks than CPUs, AI computation has driven new designs for data center connectivity.
AI computational tasks involve particularly large quantities of data being moved around, so specialized fiber setups are now being seen in other verticals where AI is part of the solution.
When it comes to deciding which option to choose, consider that ProLabs transceivers are 100% compatible with OEM equipment, and that ProLabs also provides access to fiber experts.
For banking infrastructure, transceivers from ProLabs offer a compelling cost-benefit proposition. They provide significant cost savings compared to OEMs without sacrificing performance, reliability, or compatibility.
Eager to learn more? Download our recent white paper or drop us a line directly with your questions.