Francisco De Sousa Webber established Cortical.io to apply the principles of semantic supercomputing to text processing

Guy Matthews, Freelance Contributor

March 26, 2021

4 Min Read


The human brain is a remarkable instrument that functions in ways that no computer can replicate. Its workings, though, continue to provide inspiration for those studying AI.

Take the example of Cortical.io, a developer of AI-based natural language understanding (NLU) solutions.

Its unique approach to NLU has the potential to solve many of the challenges related to the processing of vast amounts of unstructured text. Cortical.io applies the discipline it calls ‘semantic supercomputing’ to real-life business issues, using techniques inspired by the neuroscientific background of its CEO and founder, Francisco De Sousa Webber. The company relies on hardware acceleration to better understand streams of natural language content at a massive scale. Its solutions are deployed at several Fortune 500 companies, in contexts including semantic search and contract analytics.

Webber reflected on the origins of this technology: “If we go back to the 1990s, I was working as a researcher in a university hospital, analyzing patient data. I became interested in finding better ways to mine all those notes that doctors collect. It seemed impossible at the time that any way could be found to manage this. In those days, AI was really the realm of crazy scientists. Then came the idea of statistical modeling.”

With breakthroughs in natural language processing, he said, suddenly it became possible to upgrade the pools of data that were piling up, and structure the results to make them navigable and intelligible. “We entered the era of the Google-driven Internet as we know it today. What we have is in many ways a linear descendant of that statistical modeling [approach], with brute force added in the form of better and faster processing. It’s impressive to think what you can do today, especially with language.”

The market for natural language understanding certainly seems to have gone through rapid changes: “As recently as 2015, if you’d talked about automatically going through emails to find where customers were dissatisfied, it would have seemed an impossible task,” Webber reflected.

“NLP and NLU are now a key resource for the global economy, even if there are only a relative handful of companies big enough to provide that resource.”

Semantic supercomputing, he explained, works by offering a method of processing that doesn’t treat data as a sequential stream of items: “A traditional approach would be looking for a file on your computer, and you have to know what the file is called and where it is stored. With semantic supercomputing, you ask the system something like ‘I’d like to know more about the electric car that I read about in a newspaper a few days ago’, and it can go and find that information for you without needing to know where it comes from. It does this at a massively parallel scale, which is why we use FPGA chips.”

Cortical.io delivers this functionality as a pre-configured appliance, shielding the user from any software tuning: “You just interact with the system through a browser and attach it to a data source,” Webber said. “This helps turn the power of AI into a business commodity, and offers an efficiency improvement of several orders of magnitude. You don’t need a thousand CPUs and a month, it’s done in real-time.”
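The retrieval-by-meaning idea Webber describes can be illustrated with a toy sketch. This is not Cortical.io's actual implementation; it simply assumes that documents and queries are encoded as sparse binary "fingerprints" (sets of active bit positions from some hypothetical encoder), so that a query is answered by overlap of meaning rather than by file name or location:

```python
# Illustrative sketch only -- the fingerprints and document names here are
# invented, and a real system would derive them from a trained encoder.

def overlap(a: set[int], b: set[int]) -> int:
    """Similarity = number of active bits two fingerprints share."""
    return len(a & b)

# Toy document fingerprints (sets of active bit positions).
docs = {
    "budget.xlsx":       {3, 17, 42, 99, 204},
    "ev-article.txt":    {8, 17, 56, 120, 311},   # mentions electric cars
    "meeting-notes.txt": {5, 42, 77, 150, 299},
}

# Fingerprint for a query like "the electric car I read about recently".
query = {8, 56, 120, 311, 400}

# Retrieve the document whose fingerprint overlaps the query the most.
best = max(docs, key=lambda name: overlap(docs[name], query))
print(best)  # -> ev-article.txt
```

Every document is compared against the query; on FPGA hardware those comparisons can run in parallel rather than in the loop Python implies.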

So what makes FPGAs (in Cortical’s case, Xilinx Alveo accelerator cards) suitable for the task at hand? “With a regular chip, there is a bottleneck: everything you want to work with has to go through a tiny link between the processor and the memory,” Webber said. “Instead of a processor, we have smart memory cells. They find what you want in memory by looking for a pattern. That’s how the human brain works, which helps explain why I started with neuroscience as a way to find a more efficient way of doing things.”
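The "smart memory cells" Webber describes are essentially content-addressable memory: each cell compares its stored pattern against a probe, and matching cells answer. The sketch below is a minimal software simulation of that idea with invented patterns and payloads; on an FPGA the per-cell comparisons would happen simultaneously in hardware, whereas Python has to loop:

```python
# Minimal content-addressable lookup sketch (illustrative data, not a real
# Cortical.io structure). Each "cell" holds a bit pattern plus a payload.

cells = [
    {"pattern": 0b1010_1100, "payload": "contract clause A"},
    {"pattern": 0b0110_0011, "payload": "customer complaint"},
    {"pattern": 0b1010_1101, "payload": "contract clause B"},
]

def match(probe: int, max_mismatch: int = 1) -> list[str]:
    """Return payloads of cells whose pattern differs from the probe in at
    most max_mismatch bit positions (Hamming distance), i.e. lookup by
    content similarity rather than by address."""
    hits = []
    for cell in cells:
        distance = bin(cell["pattern"] ^ probe).count("1")
        if distance <= max_mismatch:
            hits.append(cell["payload"])
    return hits

print(match(0b1010_1100))  # -> ['contract clause A', 'contract clause B']
```

Allowing a small Hamming distance makes the lookup tolerant of noisy or partial patterns, which is the point of matching by content instead of by exact address.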

One of the top priorities for the AI industry is to make its wares more accessible to business users by moving out of the realm of data scientists and software engineers: “Our products are built in such a way that the subject matter expert is the person sitting in the driving seat,” Webber said. “They use the product to train the AI with no data scientist in between. You can build your own document models, which you own.”

His ultimate goal is to allow other software companies to develop semantic-based business applications for the Cortical platform: “It’s not a cake we want to eat all by ourselves,” Webber concluded. “We want a crowd to come to the table and eat it with us.”

To find out more about FPGAs as a hardware platform for AI, we invite you to download the AI Business eBook AI in the data center: Harnessing the power of FPGAs.

About the Author(s)

Guy Matthews

Freelance Contributor

Professional writer on communications topics: Datacenters, fiber, IT & telecom

