I am a data scientist and software architect working with ntop.
Between 2014 and 2015, I was responsible for guiding and helping a University of Pisa spinoff company in designing and implementing large-scale processing systems for Intellectual Property (IP) data, including parallel data crawlers, web scrapers, databases, and trend/correlation analyzers. Working both independently and as part of a team, I was involved in all phases of data projects, from design to implementation. I designed big data solutions and developed code, scripts, and data pipelines that leverage structured and unstructured data integrated from multiple sources, including stock markets, the SEC, the USPTO, and the EPO.
Between 2011 and 2014, I was a researcher at the Institute for Informatics and Telematics (IIT) of the Italian National Research Council (CNR) in Pisa. During the same period I completed a PhD in computer engineering at the University of Pisa's Department of Information Engineering (IET). My main research topics were parallel algorithms and complex network analysis, with special emphasis on the Internet and stock markets.
I am interested in everything related to data analysis and information processing systems, especially when heterogeneous sources are combined to extract value from raw data. Python is the language I have the most experience with, but I love C and C++ as well.