Tech Stack and Portfolio

We are a team of young people, organised in an agile way to manage every aspect of digital innovation.
We are both a start-up, researching and developing proprietary and open-source products, and a multidisciplinary software house offering IT consultancy.
We like to try, study, explore and try harder when we fail.


Frontend

  • TypeScript: we like types
  • React: SPA
  • Next.js: Static and SSR projects
  • Angular: complex and scalable SPA
  • Svelte: efficient SPA
  • Astro: Static projects
  • Qwik: SSR projects


Mobile

  • Flutter: multiplatform development
  • React Native: multiplatform development


Backend

  • NodeJS (TypeScript): microservices, APIs, WebSockets
  • Go: high performance and concurrent applications, web services and APIs
  • Rust: safety (and performance) critical applications
  • Python: general purpose scripting
  • Scala: solid distributed systems
  • WebSockets: real-time user experiences
  • Redis: caching, pub/sub, event bus and geospatial indexing
  • MongoDB: NoSQL database, geospatial indexing
  • PostgreSQL: SQL database


Infrastructure

  • NGINX: reverse proxy
  • Docker: software containerization and portability
  • Terraform: infrastructure as code (IaC)
  • Grafana: observability dashboard
  • Prometheus: monitoring, metrics and telemetry

Cloud services

  • Cloudflare Workers: serverless functions
  • Cloudflare Pages: hosting
  • AWS Amplify: hosting
  • Vercel: hosting
  • Firebase: cloud storage and authentication
  • Stripe: payments management


AWS

  • S3: object storage
  • ECS: clustered containerized applications
  • EKS: kubernetes on AWS
  • EC2: general purpose virtual machines
  • Lambda: serverless functions
  • CloudFront: CDN
  • CloudWatch: metrics, logs and analysis
  • ElastiCache: managed Redis deploy
  • RDS: SQL database deploy
  • DynamoDB: NoSQL database deploy
  • SQS: message queues for event driven applications
  • Cognito: authentication
  • AppSync: managed GraphQL with real-time subscriptions over WebSockets (e.g. realtime chat)


CI/CD

  • Jenkins: CI/CD
  • GitLab: CI/CD
  • GitHub Actions: CI/CD
  • Bitbucket Pipelines: CI/CD
  • SonarCloud: code quality assurance
  • Fastlane: mobile apps build and deploy




Multijet

Multijet is a highly opinionated framework/template for building large Web API monorepos in TypeScript. Although it ships with an opinionated default structure and settings, it remains very flexible, because it is a curated tech stack rather than a "closed" framework.


Features:

  • OpenAPI models and routes code generation

  • Microservice monorepo structure

  • Very fast and small builds (using ESBuild to bundle each service)

  • Optimized for serverless environments

  • Portable deployment (targets AWS Lambda or a Dockerfile)

  • Powerful CLI to scaffold new projects and generate code

  • Support for Node.js and Bun runtimes

  • Strict linting and TypeScript-only to ensure safety

  • Integrated logging, dependency injection, automatic routing


Technology Stack:

  • Fastify: web framework

  • ESBuild: bundler and build tool

  • Turborepo: fast monorepo management

  • npm workspaces: multi project structure
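The OpenAPI code generation above can be sketched in a few lines: each operation in the spec gets a typed handler signature, so a mismatched request or response shape fails at compile time. The interfaces and operation names below are hypothetical, not Multijet's actual generated output.

```typescript
// Hypothetical sketch of OpenAPI-driven typed routing; the shapes and
// operation names are illustrative, not Multijet's generated code.

interface CreateUserRequest { name: string; email: string }
interface CreateUserResponse { id: string; name: string }

// One typed handler per OpenAPI operation: wrong shapes fail at compile time.
type Handlers = {
  createUser: (req: CreateUserRequest) => CreateUserResponse;
};

const handlers: Handlers = {
  createUser: (req) => ({ id: "u_1", name: req.name }),
};

const res = handlers.createUser({ name: "Ada", email: "ada@example.com" });
console.log(res); // the generated router would serialize this as the HTTP response
```

Generating the `Handlers` type from the spec, rather than writing it by hand, is what keeps routes and models in sync across a large monorepo.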



Blaze

Blaze is a minimal Go template that provides a starting point for new projects. It is designed to be a simple yet powerful foundation for building fast, idiomatic and maintainable HTTP services in Go. The template aims to be unopinionated, following the official Google conventions and the standard Go project layout. Blaze's main advantage is its 100% compatibility with the standard library, making it universal and compatible with the whole Go HTTP ecosystem (middlewares, routers, adapters, tracing, serverless…).


Features:

  • Fast and very low overhead

  • Utility-driven and flexible multi-entrypoint structure

  • Simple to use without “hidden magic”

  • 100% compatible with the Go http standard library

  • Integrated structured logging

  • Seamless AWS Lambda compatibility, optimized for fast cold starts


Technology Stack:

  • chi: HTTP router (compatible with the standard library)

  • chi/middleware: custom and integrated HTTP middlewares

  • zerolog: structured logging

  • golangci-lint: linter (custom configuration with strict rules)



Cadmium

Cadmium is a minimal starter template for building microservices in Rust. Its main goal is to be extremely safe and performant without sacrificing ease of use. Cadmium is entirely based on the Tokio ecosystem and its layers, providing a reliable and battle-tested networking stack.


Features:

  • Ergonomic developer experience

  • HTTP and gRPC servers/clients

  • Extremely safe and performant

  • Typesafe handlers and automatic error handling

  • Based on Tokio ecosystem

  • Seamless AWS Lambda compatibility


Technology Stack:

  • Axum: web server based on the Tokio/hyper ecosystem

  • Tonic: Tokio layer for gRPC

  • Tokio: Rust asynchronous runtime, ecosystem and networking stack


Meddle

Meddle is a platform designed to connect diverse data sources and destinations, processing and normalizing data in soft real time. Meddle integrates sources and destinations over various industrial protocols and communication standards, including widely used open-source technologies.

The software has two main components: the Gateway and the cloud-based Software as a Service (SaaS). The Gateway is the atomic unit of Meddle, connecting multiple data sources to multiple data destinations. The SaaS component operates in the cloud, providing a management layer for multiple Gateways and their configurations, thus offering a comprehensive solution for data integration between diverse sources.


Features:

  • Integration with industrial protocols: Snap7, Allen Bradley, BacNet, CAN, COAP, Melsec, Modbus, OPC-UA, Omron Fins, MTConnect

  • Integration with open-source technologies and communication standards: Apache Kafka, AWS SQS, Prometheus, MongoDB, SQL, InfluxDB, MQTT, HTTP, REST

  • Flexible plugin architecture

  • Internal gateway data transform and rule engine


Technology Stack:

  • Python: core of the Meddle Gateway

  • TypeScript: SaaS Frontend and backend

  • Prometheus: monitoring and analysis

  • Redis: SaaS datastream Pub/Sub

  • WebSockets: SaaS datastream real time logs

  • Docker: containerization (gateway and SaaS)

  • AWS Cloud: SaaS infrastructure

    • S3, ECS (gateway deployment), EC2, Cognito (authentication)
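The gateway's transform and rule engine can be illustrated with a short sketch: readings from heterogeneous sources are normalized by transform functions, then rules fire on the normalized values. The real gateway core is Python; the shapes, keys and thresholds below are invented for illustration.

```typescript
// Illustrative sketch of a transform-and-rule pipeline; all names, keys and
// thresholds here are hypothetical, not Meddle's actual configuration.

interface Reading { source: string; key: string; value: number; ts: number }

type Transform = (r: Reading) => Reading;
type Rule = { when: (r: Reading) => boolean; then: (r: Reading) => void };

// Normalize a raw PLC value (tenths of a degree) into Celsius.
const celsiusFromRaw: Transform = (r) =>
  r.key === "temp_raw" ? { ...r, key: "temp_c", value: r.value / 10 } : r;

const alerts: string[] = [];
const overheatRule: Rule = {
  when: (r) => r.key === "temp_c" && r.value > 80,
  then: (r) => alerts.push(`${r.source}: overheat ${r.value}`),
};

function process(r: Reading, transforms: Transform[], rules: Rule[]): Reading {
  const out = transforms.reduce((acc, t) => t(acc), r);
  for (const rule of rules) if (rule.when(out)) rule.then(out);
  return out;
}

const normalized = process(
  { source: "plc-1", key: "temp_raw", value: 905, ts: Date.now() },
  [celsiusFromRaw],
  [overheatRule],
);
console.log(normalized.value, alerts.length); // 90.5 1
```

In the platform, the destinations (Kafka, SQS, InfluxDB…) would consume the normalized readings, while the rule actions feed the alerting side.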


Bubble

Bubble is a hybrid web2/web3 platform that enables users and music artists to create and exchange NFTs while interacting through social features. Bubble uses Polygon, an Ethereum layer 2, for NFT and web3 functionality.

Thanks to its hybrid nature, the platform seamlessly conceals the complexities and drawbacks associated with web3, simplifying the onboarding process and facilitating the exchange of assets and NFTs among users and artists for a more accessible and user-friendly experience.


Features:

  • users can create, mint and sell NFTs to other users

  • artists can create unique NFTs and feature them

  • web2 social network with interactions between users and artists

  • NFT artists card game

  • Metamask/WalletConnect login


Technology Stack:

  • Polygon: L2 Ethereum blockchain

  • Solidity: smart contracts

  • Next.js: frontend framework

  • TypeScript: frontend and web2 backend

  • Docker: containerization

  • AWS Cloud: web2 infrastructure

    • S3, Cognito (authentication), ECS Fargate (deploy)

  • MongoDB: NoSQL database

IT Consulting

We like to work on internal open-source projects, but also to collaborate with companies, turning their ideas and requirements into finished products. We work not only with big companies but also with startups, in a wide variety of sectors (agritech, sports, finance, manufacturing…). Some of our most notable projects are:

Paint Mixture Neural Network

The customer needed a system to support the formulation of paints (each consisting of 13 different mixtures) starting from a given RGB color.

A database of about 20k formulated elements was available.

A feed-forward neural network-based system was implemented for the prediction of new formulas.

A dashboard was implemented for the client's end use; it calls a backend based on a serverless asynchronous queuing system using AWS SQS and AWS Lambda.

Events on the SQS bus trigger an asynchronous worker that executes the AI script and records each executed task and its status in a database.


  • DB of 20k elements (very sparse; various processing was applied, such as data preprocessing and data augmentation)

  • Execution of asynchronous jobs via queues

  • Monitoring of metrics to avoid issues


Technology Stack:

  • Python with Keras for the feed-forward neural network

  • NodeJS + TypeScript for the worker managing the queues

  • AWS SQS + AWS Lambda for asynchronous job management

  • DynamoDB for data storage

  • React + TS for the dashboard to visualize results

  • AWS Cloudwatch for monitoring system usage and metrics (observability)
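The queue-driven worker flow above can be sketched as follows, with an in-memory queue and status map standing in for AWS SQS and DynamoDB, and a dummy function standing in for the Keras model (all names are illustrative):

```typescript
// Minimal sketch of the asynchronous job flow; the in-memory queue and map
// stand in for SQS and DynamoDB, and predictFormula for the Keras network.

type Job = { id: string; rgb: [number, number, number] };
type Status = "pending" | "done";

const queue: Job[] = [];
const statusDb = new Map<string, Status>(); // stand-in for DynamoDB

function enqueue(job: Job) {
  statusDb.set(job.id, "pending");
  queue.push(job); // stand-in for SQS SendMessage
}

// Stand-in for model inference: returns 13 mixture quantities for an RGB color.
function predictFormula(rgb: [number, number, number]): number[] {
  return Array.from({ length: 13 }, (_, i) => (rgb[i % 3] / 255) * 100);
}

function workerTick() {
  const job = queue.shift(); // stand-in for SQS ReceiveMessage
  if (!job) return;
  predictFormula(job.rgb);          // run the AI script
  statusDb.set(job.id, "done");     // track each executed task and its status
}

enqueue({ id: "job-1", rgb: [120, 30, 200] });
workerTick();
console.log(statusDb.get("job-1")); // done
```

The dashboard only ever reads the status store, so slow predictions never block the UI; the queue absorbs bursts of formulation requests.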

Optical Laser

This experimental software is designed to manage a continuous, high-speed binary data stream from a line optical sensor that provides 3D inline inspection over a TCP socket connection. The system processes the data through analysis and filtering, manages alert mechanisms, and streams data to a client, which displays charts and related information in real time, ensuring an effective and responsive user interface.

The laser sensor is positioned above a material moving on a conveyor belt, performing a multi-point distance measurement. The output of the sensor can be simplified into a list of n numerical values (samples) indicating the distance to the material.

The software is strongly data-driven: all measurements, calculations, and analyses are specifically carried out for the type of data obtained from the sensor and, consequently, from the passing material.


  • soft real-time measurement of each sample received from the sensor (very high speed, ~10,000 samples/s)

  • web dashboard to visualize the sample graph in real time and manage sensor and user settings

  • alerting system that triggers when measurements exceed user-defined limits

Given the very high throughput and the concurrent nature of the software, we chose Go as the language for the server. All internal components communicate with each other via synchronized channels. The server runs a variety of concurrent operations:

  • TCP sensor data ingestion 

  • TCP sensor command dispatch

  • REST API to communicate with the web dashboard

  • data parsing, processing and measurement

  • Websockets API to stream throttled data to the web dashboard

  • alert checks based on measurements


Technology Stack:

  • Go: server language

  • MongoDB: database to store alerting samples and user settings

  • Svelte: web dashboard

  • Grafana and Prometheus: internal software metrics and monitoring
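The server itself is written in Go, but the throttling idea behind the WebSocket stream can be sketched in a few lines of TypeScript: raw samples arrive far faster than a browser can render, so bursts are collapsed into summary frames before streaming (the frame shape below is illustrative):

```typescript
// Illustrative sketch of sample throttling for the dashboard stream; the
// Frame shape is hypothetical, not the production protocol.

interface Frame { min: number; max: number; mean: number; count: number }

// Collapse a burst of raw distance samples into one summary frame.
function throttle(samples: number[]): Frame {
  const sum = samples.reduce((a, b) => a + b, 0);
  return {
    min: Math.min(...samples),
    max: Math.max(...samples),
    mean: sum / samples.length,
    count: samples.length,
  };
}

// At ~10,000 samples/s the dashboard might receive e.g. 30 frames/s instead.
const frame = throttle([12, 12, 13, 13]);
console.log(frame.mean); // 12.5
```

Keeping min and max alongside the mean preserves outliers, which matters for the alerting checks even after downsampling.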

Welding Boards

This software serves as a centralized orchestration system for a network of spot-welder microcontrollers within a local environment. It periodically retrieves data from each welder, enabling real-time monitoring and streamlining the aggregation of this data for later analysis. Designed for portability, the software is sold together with the spot-welder microcontrollers, ensuring a seamless integration and management experience for users.


Features:

  • connecting to multiple welder microcontrollers

  • retrieving data from each welder REST API

  • storing every single spot made by every welding machine, enabling later data analysis

  • sending commands and settings to each welder

  • managing welder settings backups

  • visualizing connected welders, data and their details in a Web interface

  • the entire software stack is easily portable and installable on any server

  • license key management

  • over-the-air updates for every user

Each welder exposes a documented JSON HTTP API, enabling the server to easily retrieve data from and send commands to it.

The server is divided into three main components: the API (business logic for the web dashboard), the Daemon (welders manager) and the Worker (single-welder data scraper). The components are decoupled and communicate with each other over an Event Bus.

When adding a new welder, the user specifies its IP address; the Daemon then starts a Worker that scrapes the welder's data, stores it and manages the welder's status.
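The Daemon/Worker decoupling over an event bus can be sketched with Node's built-in EventEmitter standing in for the Redis event bus used in production (the event names and payloads are illustrative):

```typescript
// Sketch of the API/Daemon/Worker split over an event bus; EventEmitter
// stands in for Redis, and all event names here are hypothetical.
import { EventEmitter } from "node:events";

const bus = new EventEmitter(); // stand-in for the Redis event bus

const scraped: Array<{ ip: string; spots: number }> = [];

// Worker: scrapes one welder's REST API and publishes the result.
function startWorker(ip: string) {
  const data = { ip, spots: 42 }; // stand-in for an HTTP GET to the welder
  bus.emit("welder:data", data);
}

// API component: consumes scraped data for the web dashboard.
bus.on("welder:data", (d: { ip: string; spots: number }) => scraped.push(d));

// Daemon: when the user registers a welder by IP, spawn a Worker for it.
bus.on("welder:added", (ip: string) => startWorker(ip));

bus.emit("welder:added", "192.168.1.50");
console.log(scraped.length); // one scraped entry for the new welder
```

Because the components only share event names, each one can be containerized and restarted independently, which is what makes the stack easy to ship as a single portable bundle.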

The software is distributed as a virtual machine disk (VMware, VirtualBox and Hyper-V) containing the entire tech stack.


Technology Stack:

  • React: web dashboard

  • Node.js and TypeScript: API, Worker and Daemon

  • Go: OTA update manager, License key management

  • MongoDB: main database for business logic and welder data

  • Redis: cache, Event Bus

  • NGINX: reverse proxy

  • Docker: containerization for each component

  • Debian: OS delivered in the final virtual disk

Anti-Money Laundering Data Warehouse

This project addresses the critical need to establish a computerized database dedicated to anti-money laundering compliance. It aims to facilitate data and information retention in accordance with regulatory guidelines, automate risk profile calculations, feed into the automated analysis system for abnormal operations, and automate the annual self-assessment process.

Features & Methodologies:

  • Data Warehousing Solution: the project implements a comprehensive data warehousing solution tailored for anti-money laundering compliance.

  • REST Interface: The system provides interaction capabilities through a REST interface, ensuring seamless communication with other components.

  • Microservices Architecture: a microservices architecture provides scalability and flexibility.

  • Domain-Driven Design (DDD): aligning software design with the business domain.

  • Database Management: Utilizes TypeORM and AWS RDS with Aurora PostgreSQL for efficient database management and storage.

  • KYC Evaluation Module: dedicated module for KYC evaluation, enabling organizations to assess customer identities, verify documentation, and flag suspicious activities.
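The automated risk-profile calculation can be illustrated with a small sketch: weighted KYC factors produce a score that maps to a risk band. The factors, weights and thresholds below are invented for illustration and are not the regulated formula:

```typescript
// Hypothetical risk-profile sketch: factors, weights and thresholds are
// illustrative only, not the actual compliance rules.

interface RiskFactors {
  pepMatch: boolean;        // politically exposed person match
  highRiskCountry: boolean;
  cashIntensity: number;    // 0..1, share of cash operations
}

function riskScore(f: RiskFactors): number {
  let score = 0;
  if (f.pepMatch) score += 50;
  if (f.highRiskCountry) score += 30;
  score += Math.round(f.cashIntensity * 20);
  return score; // 0..100
}

function riskBand(score: number): "low" | "medium" | "high" {
  if (score >= 70) return "high";
  if (score >= 40) return "medium";
  return "low";
}

const s = riskScore({ pepMatch: false, highRiskCountry: true, cashIntensity: 0.8 });
console.log(s, riskBand(s)); // 46 medium
```

Keeping the scoring pure and deterministic makes the annual self-assessment auditable: the same stored inputs always reproduce the same risk band.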

Technology Stack:

  • Nest.js: High level framework for building efficient and scalable server-side applications in Node.js

  • AWS Lambda: Serverless computing service for running code without provisioning or managing servers.

  • TypeORM: ORM (Object-Relational Mapping) library for TypeScript and JavaScript, simplifying database management.

  • AWS RDS (Aurora PostgreSQL): Managed relational database service by AWS for seamless database operations.

  • AWS SAM: Serverless application model for defining and deploying serverless applications on AWS.

  • GitLab CI: Continuous integration and delivery platform for automating testing and deployment processes.