Oooh, financial data! To say that it’s complicated would be an understatement. IBM’s new AI can help.

By Petros Zerfos, Xuan-Hong Dang, and Syed Yousaf Shah

Analysts’ reports, corporate earnings, stock prices, interest rates. Financial data isn’t an easy read. And there’s a lot of it.

Typically, teams of human experts go through and make sense of financial data. But as the volume of sources keeps surging, it’s becoming increasingly difficult for any human to read, absorb, understand, correlate, and act on all the available information.

We want to help.

In our recent work, “The Squawk Bot”: Joint Learning of Time Series and Text Data Modalities for Automated Financial Information Filtering, we detail an AI and machine learning mechanism that helps correlate a large body of text with numerical data series describing financial performance as it evolves over time. Presented at the 2021 International Joint Conference on Artificial Intelligence (IJCAI), our deep learning-based system pulls potentially relevant textual descriptions out of vast amounts of text data, descriptions that explain the performance of a financial metric of interest, without the need for human experts or labelled data. …
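For intuition, here is a minimal sketch of the basic idea, written in PyTorch. It is not the authors’ actual architecture, and all names, dimensions, and data are made up for illustration: encode the time series, score each candidate text snippet against it with attention, and use the attention weights to rank which snippets look most relevant to the metric’s behavior.

```python
import torch
import torch.nn as nn

class TextSeriesRelevance(nn.Module):
    """Toy joint model: attention between a time-series encoding and text embeddings."""
    def __init__(self, series_dim=1, text_dim=64, hidden=32):
        super().__init__()
        self.series_encoder = nn.GRU(series_dim, hidden, batch_first=True)
        self.text_proj = nn.Linear(text_dim, hidden)

    def forward(self, series, text_embeddings):
        # series: (batch, time, 1), e.g. daily values of a financial metric
        # text_embeddings: (batch, n_docs, text_dim), one vector per news snippet
        _, h = self.series_encoder(series)        # final hidden state: (1, batch, hidden)
        query = h[-1].unsqueeze(1)                # (batch, 1, hidden)
        keys = self.text_proj(text_embeddings)    # (batch, n_docs, hidden)
        scores = (query * keys).sum(-1)           # dot-product attention logits
        return scores.softmax(dim=-1)             # relevance weight per snippet

model = TextSeriesRelevance()
series = torch.randn(2, 30, 1)     # 2 metrics, 30 time steps of fake data
texts = torch.randn(2, 5, 64)      # 5 candidate snippets each, fake embeddings
print(model(series, texts))        # higher weight = snippet ranked as more relevant
```

In a full system, a model along these lines would be trained end to end on a downstream objective such as forecasting the series, so the attention weights learn which texts actually help explain the numbers rather than relying on labelled relevance data.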


Novel coronavirus SARS-CoV-2 (Credit: Creative Commons)

By Dario Gil

On 31 December 2019, the world welcomed a new year unaware that several cases of viral pneumonia of an unknown cause had emerged in the Chinese city of Wuhan. Forty-four people were ill, 11 of them severely, and those cases were reported to the World Health Organization’s Country Office in China.

Those few cases of COVID-19 grew to a few hundred and then to a few thousand until that trickle became a flood, spilling out of mainland China and spreading across the globe. Just one year later, we’ve amassed 85 million cases and counting.

We’ve learned a lot during the past year about how to address global crises, but in my mind, one lesson cannot be ignored: The need for more strategic collaborations across institutions and sectors. …



By Dario Gil

I remember my first IBM patent.

Granted to my IBM colleagues and me in 2005, it was for a topcoat waterproof material for a photoresist, a light-sensitive substance used to make circuit patterns for semiconductor chips. It was a proud moment for me, especially as I knew that this patent described novel capabilities that were critical to a brand-new technology called immersion lithography. This technology soon became the basis for how all advanced chips are manufactured, and it remains so to this day.

I also knew it had contributed to IBM’s patent leadership that year. Just as in the 13 years before and the 15 years after, IBM was granted more patents than any other company in the US. …


Neurons: real or artificial? Credit: IBM Research

By Abu Sebastian

Ever noticed that annoying lag that sometimes happens when you’re streaming, say, your favorite football game over the internet?

Called latency, this brief delay between a camera capturing an event and the event being shown to viewers is certainly annoying during the decisive goal of a World Cup final. But it could be deadly for a passenger in a self-driving car that detects an object on the road ahead and sends images to the cloud for processing, or for a medical application evaluating brain scans after a hemorrhage.

Our team at IBM Research, together with scientists from the universities of Oxford, Münster, and Exeter, has developed a way to dramatically reduce latency in AI systems. We’ve done it using photonic integrated circuits that use light instead of electricity for computing. In a recent Nature paper, we detail our combination of photonic processing with what’s known as the non-von Neumann, in-memory computing paradigm, demonstrating a photonic tensor core that can perform computations with unprecedented, ultra-low latency and compute density. …
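To make the core operation concrete, here is a minimal sketch in plain NumPy (not the photonic hardware or any IBM library) of the multiply-accumulate work a tensor core performs. In the photonic, in-memory approach, the weights are stored in the optical device itself and the inputs are encoded onto light passing through it, so the result emerges without shuttling data back and forth to separate memory.

```python
import numpy as np

# Weights of one small tensor-core tile. In an in-memory, photonic version,
# these values would be held directly in the device (for example, in
# phase-change material cells), so they never move during computation.
W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.2]])

# Input vector, e.g. a patch of image pixels. Optically, this would be
# encoded onto light of different wavelengths.
x = np.array([0.9, 0.1, 0.4])

# The core operation: one multiply-accumulate per output element.
# A conventional processor fetches W and x from memory, multiplies, and
# writes the result back; an in-memory photonic core produces y in a
# single pass of light through the device.
y = W @ x
print(y)
```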



By Stefan Wörner & Will Zeng

When quantum computing meets the world of finance, things are bound to get shaken up.

In a new arXiv preprint, “A Threshold for Quantum Advantage in Derivative Pricing,” our quantum research teams at IBM and Goldman Sachs give the first detailed estimate of the quantum computing resources needed to achieve quantum advantage for derivative pricing, one of the most ubiquitous calculations in finance.

We describe the challenges in previous quantum approaches to derivative pricing, and introduce a new method for overcoming them. The new approach — called the re-parameterization method — combines pre-trained quantum algorithms with approaches from fault-tolerant quantum computing to dramatically reduce the estimated resource requirements for pricing financial derivatives using quantum computers. …
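For context, the classical workhorse for derivative pricing is Monte Carlo simulation, and the quantum approaches discussed in the paper build on amplitude estimation, which offers a quadratic speedup over classical sampling for a given target accuracy. Below is a minimal classical sketch for a plain European call option; the derivatives analyzed in the paper (such as path-dependent options) are more involved, and the parameter values here are made up for illustration.

```python
import numpy as np

def mc_european_call_price(s0, strike, rate, vol, maturity, n_paths, seed=0):
    """Classical Monte Carlo price of a European call under Black-Scholes.

    The estimation error shrinks like 1/sqrt(n_paths); quantum amplitude
    estimation targets the same expectation value with error shrinking
    like 1/n, which is the source of the potential quantum advantage.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under geometric Brownian motion.
    s_t = s0 * np.exp((rate - 0.5 * vol**2) * maturity + vol * np.sqrt(maturity) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Illustrative (made-up) parameters.
print(mc_european_call_price(s0=100, strike=105, rate=0.01, vol=0.2,
                             maturity=1.0, n_paths=200_000))
```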



By Flavio Bergamaschi, Russ Daniel, and Ronen Levy

Over a decade ago, IBM Research set the world of cryptography abuzz when our scientists announced a major breakthrough in Fully Homomorphic Encryption (FHE). A mouthful perhaps, but this mathematical concept allows something no other crypto scheme does: performing arbitrary calculations on encrypted data without decrypting it.

And now we are taking this work to the next level. Our team is offering a first-of-its-kind homomorphic encryption security services package that provides education, expert support, and a prototyping environment for clients, enabling them to start experimenting with FHE.

Researchers first started tinkering with homomorphic encryption in the 1970s, but the real pivotal moment came in 2009. It was then that Craig Gentry, an IBMer at the time and now a research fellow at the Algorand Foundation, published his seminal work, A Fully Homomorphic Encryption Scheme. Thanks to this work, researchers and companies began to consider FHE for cloud security, from banking and financial services to online shopping and healthcare. At the time, Craig compared it to “one of those boxes with the gloves that are used to handle toxic chemicals… All the manipulation happens inside the box, and the chemicals are never exposed to the outside world.” …
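To get a feel for computing on ciphertexts, here is a toy sketch of the Paillier cryptosystem. Paillier is only additively homomorphic, not fully homomorphic (it supports sums, not arbitrary calculations), and the tiny parameters below are hopelessly insecure; it is purely an illustration of the homomorphic idea, not the scheme behind IBM’s FHE work.

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# Toy Paillier keys with tiny, insecure primes (illustration only).
p, q = 61, 53
n = p * q                 # public modulus
n2 = n * n
g = n + 1                 # standard simplified choice of generator
lam = lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)      # precomputed value used during decryption

def encrypt(m, r):
    # c = g^m * r^n mod n^2, with r coprime to n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) / n, then m = L(c^lam mod n^2) * mu mod n
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

c1 = encrypt(20, r=17)
c2 = encrypt(22, r=29)

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % n2
print(decrypt(c_sum))  # 42, computed without ever decrypting c1 or c2
```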



By Fabiana Fournier

The high-end textile industry is anything but mass manufacturing. Be it cashmere, alpaca, or merino, high-end textiles are often custom made and woven into different textures, colors, and products. So how can companies track this supply chain and make sure every product and process can be traced back and its role in manufacturing verified?

Enter blockchain. My team at IBM Research has developed a solution for the textile industry using the IBM Blockchain Transparent Supply (BTS) platform. …



A team of mathematicians has resolved an issue with the ‘optimal transport’ technique that compares distributions in machine learning — effectively getting rid of the infamous ‘curse of dimensionality’

By Soumyadip Ghosh & Mark Squillante

It all started with military barracks — and math.

In the 1940s, during World War II, Russian mathematician Leonid Kantorovich wanted to minimize the time soldiers would spend getting from their barracks to the front line. In science speak, this planning problem of minimizing the cost of moving things from one set of positions to another is dubbed ‘optimal transport,’ and it has been tickling mathematicians’ brains for decades. …
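In machine learning, the same idea is used to compare probability distributions: the optimal transport (or Wasserstein) distance measures how much “work” it takes to morph one distribution into another. Here is a minimal one-dimensional illustration using SciPy; the curse of dimensionality the team tackles only bites once the distributions live in many dimensions and must be estimated from samples, so this toy stays in 1D, with made-up data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Two empirical distributions: think travel times from two sets of barracks,
# or feature values produced by two different models.
samples_a = rng.normal(loc=0.0, scale=1.0, size=5_000)
samples_b = rng.normal(loc=0.5, scale=1.2, size=5_000)

# 1D optimal transport (Wasserstein-1) distance between the two samples:
# the minimum average distance probability mass must be moved to turn
# one empirical distribution into the other.
print(wasserstein_distance(samples_a, samples_b))
```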


Natural language is… well, natural for us humans. Not so for computers. But we are getting there.

By Katia Moskvitch

“What’s taking up all that space?!” That’s what you’d probably say to a human to find out what file was eating up all of the space on your hard drive. But when dealing with a computer, you’d have to be more precise and say, somewhat boringly: “Display the top result from a list of files sorted in decreasing order of size, displayed in gigabytes / human readable format.”

This is what researchers badly want to change. Getting a machine to ‘understand’ natural language, the way you’d speak to a human, has been a hot area of research for years. So hot, in fact, that in July 2020 an IBM team led by computer scientists Mayank Agarwal, Tathagata Chakraborti, and Kartik Talamadupula co-organized a competition to improve the translation of natural language into command-line instructions. The specific challenge was to build an algorithm that can translate an English description of a command-line task into the corresponding command-line syntax; for the request above, that means producing something like du -ah | sort -rh | head -n 1. …


Artificial intelligence and deep learning may seem secure, but even deep neural networks are vulnerable to hacking

By Katia Moskvitch

Deep learning may have revolutionized AI — boosting progress in computer vision and natural language processing and impacting nearly every industry. But even deep learning isn’t immune to hacking.

Specifically, it’s vulnerable to a curious form of hacking dubbed ‘adversarial examples.’ These are inputs that a hacker has very subtly changed in a specific way, such as imperceptibly altering the pixels of an image or a few words in a sentence, to force the deep learning system to fail catastrophically.

AI has to be robust to withstand such attacks, and adversarial robustness also extends to defenses against ‘natural’ adversaries, be it white noise, blackouts, image corruption, text typos, or unseen data. While computer vision models are advancing rapidly, it’s possible to make them more robust by exposing them to subtly altered images through adversarial training. But this process is computationally expensive and imperfect; there will always be outlier images that can trip the model up. …
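For a concrete picture of how such perturbations are crafted, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way of generating adversarial images of the kind used in adversarial training. It is an illustrative baseline, not necessarily the attack or defense this article has in mind, and the model and data below are made-up stand-ins.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon):
    """Return an adversarial copy of x: each pixel is nudged by at most
    epsilon in the direction that increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixel values in a valid range
    return x_adv.detach()

# Tiny stand-in classifier and fake "image" batch, just to make the sketch runnable.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)       # 4 fake grayscale images
y = torch.randint(0, 10, (4,))     # fake labels

x_adv = fgsm_perturb(model, x, y, epsilon=0.03)

# Adversarial training then mixes such perturbed images into the training data,
# e.g. minimizing the loss on (x_adv, y) alongside (x, y).
print((x_adv - x).abs().max())     # perturbation stays within epsilon
```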

About

Inside IBM Research

This is the official Medium account of IBM Research. It’s managed by IBM Research’s Chief Writer Katia Moskvitch & follows the IBM Social Computing Guidelines.
