This portfolio showcases our expertise and ability to combine cutting-edge technologies to deliver tangible solutions that address real business needs. Through a selection of case studies, we demonstrate how DMBI Consultants has helped companies of all sizes and across industries achieve their business goals. The presented cases represent just a glimpse of our capabilities and commitment to innovation. We are constantly researching and developing new solutions to help our clients maximize the value of emerging technologies.
Imagine a tool that allows you to monitor the creditworthiness of customers and identify potential risks.
The project involved the creation of a sophisticated credit risk monitoring tool, built with SAS for data processing and Tableau for visualization.
This tool was designed to provide a comprehensive and detailed overview of credit exposures, enabling financial institutions to make informed and proactive decisions to mitigate risks.
Dynamic and interactive dashboards have been created using Tableau Business Intelligence software, providing a comprehensive and detailed overview of the bank’s financial situation and customer credit quality.
The dashboards offer an immediate and intuitive view of a wide range of metrics.
The project’s goal was to gain a deeper understanding of the insurance business to optimize performance and customer satisfaction.
Through a data integration process, indicators related to premium collection and claims settlement for insurance products were collected and consolidated.
This data was then used to populate dashboards that allow for monitoring the trend of key KPIs for the business.
To facilitate a thorough analysis of business performance, the data was structured in a multidimensional data warehouse based on Oracle DB.
The organization of the data, segmented by territory and divided into monthly and weekly intervals, has allowed for granular extraction of information.
To meet the specific needs of each business line, insurance product or territory, customized KPIs and reports have been developed.
The robustness of the system has been guaranteed by the implementation of a rigorous calculation logic for reserves, lapse and claim rates.
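To give a concrete sense of the kind of rate logic involved, here is a minimal Python sketch of two of the indicators mentioned. The formulas and figures are simplified, invented assumptions for illustration, not the project's actual reserve calculation logic.

```python
# Simplified, hypothetical KPI formulas (not the project's actual logic):
# lapse rate  = policies lapsed / policies in force at period start
# claim ratio = claims settled / premiums collected

def lapse_rate(lapsed: int, in_force_start: int) -> float:
    """Share of policies lapsed during the period."""
    return lapsed / in_force_start if in_force_start else 0.0

def claim_ratio(claims_paid: float, premiums: float) -> float:
    """Claims settled as a fraction of premiums collected."""
    return claims_paid / premiums if premiums else 0.0

# Example monthly figures for one territory (invented)
print(lapse_rate(120, 4800), claim_ratio(350_000, 500_000))  # 0.025 0.7
```

In the real system these rates were computed per business line, product, and territory, at monthly and weekly granularity.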
Illegal land occupation for waste disposal is a problem that causes serious environmental and health damage.
To combat this phenomenon, a local administration has launched a project aimed at identifying, through high-resolution satellite images, geographical areas where illegal landfills may be present.
The use of Machine Learning algorithms allows for the rapid and efficient analysis of large quantities of images. The information obtained can be used to plan targeted control interventions by the competent authorities.
To implement the project, an AI model was developed that analyzes images of the territory divided into small tiles.
The model, through sophisticated algorithms, is able to recognize the typical characteristics of an illegal landfill.
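The tiling step can be sketched as follows in Python: the image is split into fixed-size patches and each patch is passed to a classifier. The threshold classifier below is a toy stand-in for the trained model; all names and values are illustrative assumptions.

```python
import numpy as np

def split_into_tiles(image, tile):
    """Yield ((row, col), patch) for non-overlapping tile x tile patches."""
    h, w = image.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            yield (y, x), image[y:y + tile, x:x + tile]

def flag_suspect_tiles(image, tile, classifier):
    """Coordinates of tiles the classifier flags as possible landfill sites."""
    return [(y, x) for (y, x), patch in split_into_tiles(image, tile)
            if classifier(patch)]

# Toy stand-in classifier: flags bright tiles; a real system would run
# a trained model on each patch instead.
img = np.zeros((8, 8))
img[0:4, 0:4] = 255
print(flag_suspect_tiles(img, 4, lambda p: p.mean() > 128))  # [(0, 0)]
```

The flagged coordinates can then be mapped back to geographic locations for targeted inspections.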
This technology, combined with an approach that involves institutions, can contribute to creating a more sustainable and cleaner future.
The Italian Power Exchange (IPE) is a wholesale market where electricity and natural gas are bought and sold.
The National Single Price (PUN, from the Italian Prezzo Unico Nazionale) is Italy’s reference electricity price. It is set for each hour of the day and represents the weighted average of the zonal prices of electricity sold.
This project aimed to equip energy market players with innovative tools to navigate the sector’s complexity with greater awareness and proactivity.
The goal was to create a predictive model capable of anticipating the daily trend of the PUN, providing operators in the sector with a strategic overview to optimize their choices.
The PUN forecasting project adopted an innovative approach to estimate the trend of energy prices.
Through the simulation of stochastic models, the complex mechanisms that influence the dynamics of PUN have been replicated.
By integrating Data Science and Business Intelligence techniques, it was possible to extract knowledge from a vast amount of historical data, identifying the key factors that influence price fluctuations.
The project was developed in Python.
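As a rough illustration of what simulating a stochastic price model can look like, the sketch below generates one day of hourly prices with a simple mean-reverting (Ornstein-Uhlenbeck style) random walk in Python. The function name and all parameter values are invented for illustration; the project's actual models were more sophisticated.

```python
import numpy as np

def simulate_pun(hours=24, p0=100.0, mean=100.0, speed=0.3, vol=5.0, seed=0):
    """Simulate one day of hourly prices with a mean-reverting random walk."""
    rng = np.random.default_rng(seed)
    prices = [p0]
    for _ in range(hours - 1):
        drift = speed * (mean - prices[-1])  # pull back toward the long-run mean
        prices.append(prices[-1] + drift + vol * rng.standard_normal())
    return np.array(prices)

path = simulate_pun()
print(len(path), path[0])  # 24 100.0
```

Running many such simulations yields a distribution of possible daily price paths, from which operators can derive expected trends and risk bands.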
Auricular acupuncture is a technique that involves stimulating specific points on the auricle (the outer ear) with thin needles.
In collaboration with researchers from Umberto I University Hospital, a study was conducted to evaluate auricular acupuncture as a method for pain relief. The study collected and analyzed both clinical and molecular data to obtain a comprehensive picture of the effectiveness of this technique.
The researchers sought to identify biomarkers associated with the therapeutic effects of auricular acupuncture.
Previous studies have shown that acupuncture can be effective in treating various types of pain, including headaches, joint pain, and menstrual cramps.
In this project, clinical data associated with four different patient groups (pain patients, active treatment group, placebo treatment group, and control group) were processed to monitor their pain levels.
Saliva samples were also collected before and after treatment to assess changes in salivary biomarkers.
The collected data were analyzed using inferential statistical techniques, including t-tests and regression models, to identify differences between groups and correlations between variables.
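A comparison between two of the groups can be sketched in Python with SciPy (rather than SAS, which the project actually used for reporting). The pain scores below are synthetic data invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical pain scores (0-10) for two of the four groups
active = rng.normal(3.0, 1.0, 30)    # active auricular acupuncture
placebo = rng.normal(5.0, 1.0, 30)   # placebo treatment

# Welch's t-test: do the group means differ significantly?
t_stat, p_value = stats.ttest_ind(active, placebo, equal_var=False)
print(t_stat < 0, p_value < 0.05)
```

A negative statistic with a small p-value would indicate significantly lower pain in the active treatment group.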
The results were organized into comprehensive reports generated using SAS.
Rapid palatal expansion (ERP) is an orthodontic procedure used to widen the palate in children with a narrow jaw. Despite its effectiveness, it is a method that can cause pain and discomfort in patients.
The purpose of the project, carried out in collaboration with the Umberto I University Hospital of Rome, was to provide information on the nature of the pain associated with ERP.
The information collected may be useful to orthodontists to improve patient comfort and personalize treatment.
The project involved the analysis of questionnaires completed by children undergoing ERP treatment.
Through the data collected, it was possible to identify any correlations between different variables, such as the age, sex, and skeletal maturity of the children and their perception of pain.
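A correlation check of this kind can be sketched in Python with SciPy (the project itself used SAS for reporting). The age and pain values below are invented toy data with a built-in inverse relationship, purely to illustrate the method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical questionnaire data: patient age (years) and reported pain (0-10)
age = rng.uniform(8, 14, 50)
pain = 10 - 0.5 * age + rng.normal(0, 0.5, 50)  # toy inverse relationship

# Pearson correlation between age and perceived pain
r, p = stats.pearsonr(age, pain)
print(r < 0, p < 0.05)
```

A significant negative coefficient would suggest that older children in the sample report less pain.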
The results obtained were presented in detailed reports, created with SAS, which included tables, graphs, and statistical indicators.
In collaboration with the Sapienza University of Rome, we conducted a study on a dataset of patients who underwent septal myectomy (a surgical procedure that removes a portion of the thickened heart muscle and normalizes mitral valve function).
The goal of the project was to create a model, using descriptive and inferential statistical techniques, that could help surgeons assess the appropriateness of the procedure.
To carry out the project, a dataset with information on patient parameters before and after surgery was created.
Hierarchical clustering models were developed to group patients based on the similarities of their pre- and post-operative parameters.
A logistic regression model was built using pre-operative parameters to predict the probability of improvement in parameters after myectomy.
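The two modeling steps can be sketched in Python with scikit-learn. The patient parameters below are synthetic, two-dimensional toy data; the real dataset contained many more pre- and post-operative parameters.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical pre-operative parameters, two synthetic patient profiles
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
improved = np.array([0] * 40 + [1] * 40)  # 1 = parameters improved post-op

# Hierarchical clustering: group patients by similarity of parameters
clusters = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# Logistic regression: probability of post-operative improvement
model = LogisticRegression().fit(X, improved)
proba = model.predict_proba([[4.0, 4.0]])[0, 1]
print(len(set(clusters)), proba > 0.5)
```

The predicted probability for a new patient profile is the kind of output that can support a surgeon's assessment of the procedure's appropriateness.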
Climate change is intensifying extreme weather events, causing severe damage to agriculture.
As a result, crop insurance policies are becoming increasingly important. To optimize the management of these policies, our project involved developing a model to estimate the annual number of high-risk weather days. This has made it possible to define more accurate insurance premiums that are better suited to the actual level of risk.
The project involved creating a database to collect data from NASA’s MERRA (Modern-Era Retrospective analysis for Research and Applications) dataset.
Using this dataset, a Machine Learning model was trained to estimate the future days at risk of catastrophic crop events.
The resulting model was then used to optimize the price of the crop damage insurance policy.
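The pricing step can be illustrated with a minimal Python sketch: an estimate of high-risk days feeds into a premium formula. All figures, the loss rate, and the loading factor are invented assumptions, not the actual pricing model.

```python
# Hypothetical pricing sketch: all figures and factors are invented.
def expected_risk_days(annual_counts):
    """Naive estimate: average count of high-risk weather days per year."""
    return sum(annual_counts) / len(annual_counts)

def annual_premium(insured_value, risk_days,
                   loss_per_day=0.002, loading=1.25):
    """Premium = expected loss (risk days x loss rate x value) x safety loading."""
    return insured_value * loss_per_day * risk_days * loading

days = expected_risk_days([8, 12, 10, 14, 11])  # last five seasons (invented)
print(days, round(annual_premium(100_000, days), 2))  # 11.0 2750.0
```

In the real system, the Machine Learning model replaces the naive historical average with a forward-looking estimate of risk days.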
Imagine being able to gauge how much audiences like your TV shows from the opinions expressed on social media.
The goal of the project was to automatically evaluate the sentiment, or emotional orientation (positive, negative, or neutral), of tweets related to the client’s TV shows.
Comments were classified as positive, negative, or neutral using Natural Language Processing (NLP) techniques.
The project was developed using Python (open source).
The project was carried out by collecting colloquial texts for training the NLP pipeline, in order to allow the model to understand the desired linguistic context.
A deep learning model (Keras, scikit-learn, TensorFlow) was implemented for sentiment prediction.
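The overall pipeline can be sketched in Python with scikit-learn (used here instead of Keras for brevity): text is vectorized, then a classifier assigns one of the three sentiment labels. The tweets below are an invented mini-corpus; a real model trains on thousands of labeled examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus; a real model trains on thousands of labeled tweets.
texts = ["loved the show", "great episode tonight", "amazing finale",
         "terrible show", "boring episode", "worst finale ever",
         "the show airs at nine", "new episode next week", "season two announced"]
labels = ["positive"] * 3 + ["negative"] * 3 + ["neutral"] * 3

# TF-IDF features + logistic regression as a simple sentiment classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["what an amazing episode"])[0])
```

The deep learning model used in the project follows the same train-then-predict structure, with a neural network in place of the linear classifier.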
The activity was carried out using open-source tools.
In the banking world, credit risk assessment is a crucial activity.
To keep pace with the growing volume of data, a bank decided to develop an intelligent customer credit risk classification system with us.
The project involved the development of a text extraction engine based on Natural Language Processing (NLP) for analyzing credit reports.
The engine, implemented in Python, used a multi-class classification model to automatically categorize reports based on the customer’s credit risk.
The benefits of this system included the automation of the classification process, improved accuracy in credit risk assessment, and reduced analysis time and costs.
To develop this project, a dataset of credit reports labeled by managers with the risk classification (“Unlikely to Pay” or “Likely to Pay”) was created.
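The classification engine can be sketched in Python with a bag-of-words model and naive Bayes (a simple stand-in for the project's actual model). The report snippets and labels below are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented snippets standing in for manager-labeled credit reports
reports = ["missed three payments, arrears growing",
           "payments overdue, restructuring requested",
           "account in arrears, collateral insufficient",
           "regular payments, stable income",
           "loan repaid on schedule, good standing",
           "stable cash flow, payments on time"]
labels = ["Unlikely to Pay"] * 3 + ["Likely to Pay"] * 3

# Word counts + naive Bayes as a minimal text classification engine
vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(reports), labels)

new_report = ["payments overdue and arrears increasing"]
print(model.predict(vec.transform(new_report))[0])  # → Unlikely to Pay
```

New reports are vectorized with the same vocabulary and assigned the most probable risk class.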
The text mining engine was analyzed and optimized to improve its performance and reliability.
The work carried out has made it possible to increase the accuracy in the classification of credit reports.
The integration of this system with other analysis systems has made it possible to obtain a more complete view of the financial situation of each customer.
Imagine a system that allows you to know when a router is about to fail, before it causes problems on your network.
This project aimed to optimize the maintenance scheduling of a series of routers by using a predictive RUL (Remaining Useful Life) model for each unit.
This model, based on telemetry variables and sensor data collected from the routers themselves, makes it possible to predict the remaining life of each router before maintenance or repair is needed.
As a result, it was possible to reduce unplanned downtime, improve machine reliability, and optimize maintenance costs.
The project was carried out in various operational phases.
A first phase involved data acquisition: various data were collected from sources such as sensors, system logs, and performance records.
Subsequently, time series were structured: the data collected over time were transformed into series, cleaned, integrated, and normalized to ensure their consistency.
After performing a statistical analysis of the measurements, a Deep Learning model was implemented for fault prediction.
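The windowing step that turns cleaned telemetry series into supervised training pairs can be sketched as follows in Python (simplified; the project's actual model was a deep learning network trained on such windows).

```python
import numpy as np

def make_windows(series, window, horizon):
    """Slice a telemetry series into (window, label) pairs, where the
    label is the value `horizon` steps ahead of the window's end."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

temps = np.arange(10.0)  # toy sensor readings
X, y = make_windows(temps, window=3, horizon=2)
print(X.shape, y[:2])  # (6, 3) [4. 5.]
```

Each window of past readings becomes an input sample, and the future value becomes its label, which is the standard setup for RUL and fault prediction models.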
Amidst a growing focus on energy efficiency and sustainability, coupled with intensifying competition, businesses are grappling with the challenge of optimizing their energy consumption and reducing costs.
To address these challenges, we have developed an innovative system for analyzing the energy consumption of business customers, leveraging artificial intelligence and cloud computing to enhance operational efficiency and sales strategies.
To realize this project, a cloud infrastructure (serverless and scalable) based on Amazon Web Services (AWS) was created, enabling the collection, storage, and analysis of large volumes of data on customer consumption and activities.
In addition, a library of custom Machine Learning algorithms was developed to extract valuable insights from customer energy consumption and activity data.
Algorithms such as linear regression, gradient boosting, isolation forest, and K-means allow us to classify customers, understand the causes of consumption spikes, and predict future demand.
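One of the listed techniques, isolation forest for spotting consumption spikes, can be sketched in Python with scikit-learn. The consumption series below is synthetic, with two spikes injected for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical daily consumption (kWh) with two injected spikes
consumption = rng.normal(100, 5, 200)
consumption[[50, 120]] = [300.0, 280.0]

# Isolation forest flags the most "isolated" observations as anomalies
model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(consumption.reshape(-1, 1))  # -1 = anomaly
anomalies = np.where(flags == -1)[0]
print(anomalies)  # expected: the two injected spike days
```

Once flagged, the anomalous days can be cross-referenced with customer activity data to explain the causes of the spikes.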
The project involved the development of an infrastructure for monitoring the performance of power towers in specific geographical areas.
Based on the climatic and geomorphological conditions of the installation areas, it was possible to create a model capable of predicting potential malfunctions or failures of the towers.
The goal was to avoid sending maintenance personnel on site to monitor the infrastructure, with obvious savings in maintenance costs and greater efficiency of the electricity grid.
The implementation of this intelligent monitoring system was based on a complex workflow that involved several innovative technologies.
First, a massive collection of unstructured data was carried out from IoT (Internet of Things) devices installed on the power grids.
A mix of open source (flexible and scalable) and enterprise (secure and reliable) software was used to process and analyze this complex data.
The data used was stored on the MongoDB platform, a high-performance and scalable NoSQL database.