Stochastic Gradient Methods
Authorship
M.A.G.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
02.12.2026 09:00
Summary
This undergraduate thesis examines Stochastic Gradient Descent (SGD) methods as a cornerstone for solving large-scale optimization problems, particularly within the field of machine learning. Traditional gradient descent (GD) faces prohibitive computational limitations when dealing with massive datasets, as it requires processing the entire set of observations at each iteration. As an alternative, SGD employs noisy but efficient gradient estimates derived from random samples. Throughout this work, the regularity conditions necessary to ensure the method's stability, such as smoothness in expectation and variance control, are studied. Convergence results are analyzed for convex and strongly convex functions, as well as those satisfying the Polyak-Lojasiewicz condition. Finally, highly practical variants such as minibatch SGD and the momentum method are explored, justifying how these strategies mitigate noise and enhance optimization dynamics in ill-conditioned or over-parameterized scenarios.
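As a minimal illustration of the update rule the summary describes (a sketch, not code from the thesis; the least-squares objective and all names are illustrative), each iteration replaces the full gradient with the gradient of a single randomly drawn observation:

```python
import random

def sgd_least_squares(data, lr=0.01, epochs=100, seed=0):
    """Minimise f(w) = (1/n) * sum((w*x - y)^2) using single-sample gradients."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        x, y = data[rng.randrange(len(data))]  # one random sample -> noisy gradient
        grad = 2 * (w * x - y) * x             # gradient of that term only
        w -= lr * grad
    return w

# Data generated from y = 3x: the iterates should approach w = 3.
data = [(x, 3 * x) for x in [1.0, 2.0, 3.0, 4.0]]
w = sgd_least_squares(data, lr=0.02, epochs=2000)
```

Each step costs O(1) instead of O(n), which is the efficiency gain the abstract refers to; the price is the gradient noise whose variance the convergence analysis must control.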
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
Court
VAZQUEZ CENDON, MARIA ELENA (Chairman)
LOPEZ SOMOZA, LUCIA (Secretary)
CASAS MENDEZ, BALBINA VIRGINIA (Member)
Avoiding catastrophic forgetting in a continual learning framework for tabular data regression problems
Authorship
M.A.G.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
02.20.2026 09:45
Summary
This Bachelor's Thesis presents a continual and incremental learning solution for multivariate regression problems with tabular data. The research focuses on the adaptation and extension of the TRIL3 framework, whose original functionality was limited to classification, to address regression scenarios in a task-free context. The proposed methodology combines the XuILVQ prototype model for the generation of synthetic data with mixture density networks as the predictive model. This approach mitigates catastrophic forgetting in online learning environments in the presence of concept drift. The effectiveness of the system was validated through a battery of experiments on reference datasets, contrasting the results with the state of the art. The results demonstrate that the adapted solution maintains high robustness against forgetting and superior memory efficiency, with a very small ratio of stored prototypes relative to the total volume of data. Due to these characteristics, the proposal is especially suitable for deployment on edge computing devices within the Industry 4.0 paradigm.
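The rehearsal idea behind prototype-based replay can be sketched generically (a hypothetical illustration, not the actual TRIL3/XuILVQ mechanism: the admission rule, noise model, and all names are assumptions):

```python
import random

class PrototypeReplay:
    """Hypothetical rehearsal sketch: keep a few prototypes per input region
    and replay noisy pseudo-samples alongside new data to curb forgetting."""
    def __init__(self, radius=1.0, noise=0.1, seed=0):
        self.protos = []          # list of (x, y) prototype pairs
        self.radius = radius      # admit a new prototype only if none is close
        self.noise = noise
        self.rng = random.Random(seed)

    def observe(self, x, y):
        if all(abs(x - px) > self.radius for px, _ in self.protos):
            self.protos.append((x, y))

    def replay(self, k):
        """Generate k pseudo-samples by perturbing stored prototypes."""
        out = []
        for _ in range(k):
            px, py = self.rng.choice(self.protos)
            out.append((px + self.rng.gauss(0, self.noise), py))
        return out

buf = PrototypeReplay(radius=0.5)
for x in [0.0, 0.1, 1.0, 2.0, 2.2]:
    buf.observe(x, 2 * x)
pseudo = buf.replay(4)
```

The point of the sketch is the memory claim in the abstract: only the admitted prototypes are stored, a small fraction of the stream, yet they let the regressor rehearse old regions of the input space.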
Direction
MERA PEREZ, DAVID (Tutorships)
Fernández Castro, Bruno (Co-tutorships)
Court
Barro Ameneiro, Senén (Chairman)
GARCIA FERNANDEZ, JULIAN (Secretary)
LADRA GONZALEZ, MANUEL EULOGIO (Member)
Commutative algebras of finite type over a field
Authorship
I.A.G.
Bachelor of Mathematics
Defense date
02.13.2026 10:00
Summary
The aim of this project is to provide an introduction to commutative algebra, with the goal of studying algebras of finite type over a field and proving two fundamental results in this area: Hilbert's Nullstellensatz (theorem of zeros) and the Noether normalization lemma. The first chapter is intended to complement the knowledge of rings and modules already acquired in the subjects of the Degree. Then come two chapters of commutative algebra, one dedicated to rings and modules of fractions and the other to integral extensions of rings. Subsequently, the knowledge of field extensions is expanded, finally reaching the study of algebras of finite type over a field and the proofs of the results of commutative algebra already cited.
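For orientation, the two results the abstract cites can be stated as follows (standard formulations, not quoted from the thesis itself):

```latex
% Noether normalization lemma
\textbf{Noether normalization.} Let $A$ be a finitely generated algebra over a
field $k$. Then there exist $y_1,\dots,y_d \in A$, algebraically independent
over $k$, such that $A$ is integral (hence finite) over $k[y_1,\dots,y_d]$.

% Hilbert's Nullstellensatz (weak form)
\textbf{Nullstellensatz (weak form).} If $k$ is algebraically closed and
$I \subsetneq k[x_1,\dots,x_n]$ is a proper ideal, then the zero set
$V(I) = \{\, a \in k^n : f(a) = 0 \ \text{for all } f \in I \,\}$ is nonempty.
```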
Direction
GARCIA RODICIO, ANTONIO (Tutorships)
ALVITE PAZO, SAMUEL (Co-tutorships)
Court
ALVITE PAZO, SAMUEL (Student’s tutor)
GARCIA RODICIO, ANTONIO (Student’s tutor)
Nonlinear Connections and Second-Order Differential Equations
Authorship
T.G.B.C.
Bachelor of Mathematics
Defense date
02.12.2026 12:00
Summary
The main objective of this work is to provide a geometric perspective on the study of second-order differential equations (SODEs) in the context of the tangent bundle of R^n. First, a detailed construction of the tangent bundle, its structure, and the vector fields defined on it is carried out, and then this description is extended to the tangent bundle of TR^n. This extension allows for the introduction of the canonical tangent structure and the vertical subbundle, which are fundamental elements in the geometric treatment of SODEs. Subsequently, SODEs are characterized as vector fields whose integral curves satisfy systems of second-order differential equations. Through the canonical tangent structure, a correspondence is established between these vector fields and the nonlinear connections defined on the tangent bundle. The concepts of horizontal and vertical projections, as well as the splitting of short exact sequences, are also introduced and used in this formulation. The work concludes by showing how every SODE induces a nonlinear connection, and vice versa, allowing for a unified interpretation. Finally, the concept of linearizable SODEs is introduced, and a necessary condition for linearizability is provided, thus completing the geometric framework of the study.
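In standard coordinates $(x^i, y^i)$ on $TR^n$, the objects the abstract describes take the following well-known form (a standard formulation, not quoted from the thesis):

```latex
\Gamma \;=\; y^{i}\,\frac{\partial}{\partial x^{i}}
\;+\; f^{i}(x,y)\,\frac{\partial}{\partial y^{i}},
\qquad
\ddot{x}^{i} \;=\; f^{i}\!\left(x, \dot{x}\right),
```

so a SODE is precisely a vector field $\Gamma$ whose integral curves are lifts of solutions of the second-order system; the nonlinear connection it induces has coefficients $N^i_j = -\tfrac{1}{2}\,\partial f^i/\partial y^j$, with horizontal distribution spanned by $\delta/\delta x^j = \partial/\partial x^j - N^i_j\,\partial/\partial y^i$.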
Direction
SALGADO SECO, MODESTO RAMON (Tutorships)
Court
SALGADO SECO, MODESTO RAMON (Student’s tutor)
The Traveling Salesman Problem: Formulation, Solution, and Applications
Authorship
C.B.G.
Bachelor of Mathematics
Defense date
02.12.2026 16:00
Summary
This paper focuses on the study of the Traveling Salesman Problem (TSP) through the description of three alternative formulations, the Dantzig-Fulkerson-Johnson (DFJ), the Miller-Tucker-Zemlin (MTZ), and the Single Commodity Flow (SCF) formulations, focusing on the particular characteristics of each one. An analysis of resolution methods for a particular case, the Metric TSP, will also be carried out. Beforehand, an introduction to optimization problems and graph theory will be provided, as well as to minimum cost network flow problems, highlighting their low complexity compared to the TSP. Finally, as a practical complement to the theoretical properties presented, a computational study of the different formulations addressed will be presented.
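For reference, the MTZ formulation mentioned above can be written in its standard textbook form (with $c_{ij}$ the travel costs and $n$ cities; this is the generic formulation, not the thesis's exact notation):

```latex
\begin{aligned}
\min \;& \sum_{i \ne j} c_{ij}\, x_{ij} \\
\text{s.t.}\;& \sum_{j \ne i} x_{ij} = 1 \quad \forall i, \qquad
              \sum_{i \ne j} x_{ij} = 1 \quad \forall j, \\
& u_i - u_j + n\, x_{ij} \le n - 1 \quad \forall i \ne j,\; i,j \ge 2, \\
& x_{ij} \in \{0,1\}, \qquad u_i \in \mathbb{R},
\end{aligned}
```

where the $u_i$ order the visits and the third family of constraints eliminates subtours with polynomially many constraints, in contrast to the exponentially many subtour constraints of DFJ.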
Direction
GONZALEZ RUEDA, ANGEL MANUEL (Tutorships)
Court
GARCIA RIO, EDUARDO (Chairman)
GARCIA LUCAS, DIEGO (Secretary)
SANCHEZ SELLERO, CESAR ANDRES (Member)
Semidefinite optimization in polynomial optimization algorithms
Authorship
M.C.R.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
02.12.2026 16:30
Summary
This bachelor's thesis focuses on the study and application of semidefinite optimization techniques to polynomial optimization problems. It begins with the presentation of the fundamental concepts of polynomial optimization. Subsequently, the RLT technique is introduced as an algorithm for solving these types of problems, along with a practical example that illustrates its application. The following section addresses the principles of semidefinite optimization, highlighting a specific technique known as SDP cuts, which forms the basis of the computational study carried out in the final chapter. This study is conducted using the global optimization solver RAPOSa to evaluate the impact of SDP cuts on its performance. Furthermore, a sharper version of the current implementation is proposed, seeking to enhance computational efficiency.
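As background (the standard construction, not drawn from the thesis itself): an RLT relaxation replaces each product $x_i x_j$ with a new variable $X_{ij}$, and the SDP-cut technique strengthens it by enforcing positive semidefiniteness of the moment matrix,

```latex
M(x, X) \;=\;
\begin{pmatrix}
1 & x^{\top} \\
x & X
\end{pmatrix} \succeq 0,
\qquad X_{ij} \approx x_i x_j .
```

When the relaxed solution $(\bar{x},\bar{X})$ violates this condition, an eigenvector $v$ with $v^{\top} M(\bar{x},\bar{X})\, v < 0$ yields the linear cut $v^{\top} M(x,X)\, v \ge 0$, which is valid for the original problem and separates the current point.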
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
GONZALEZ RODRIGUEZ, BRAIS (Co-tutorships)
Court
GARCIA RIO, EDUARDO (Chairman)
GARCIA LUCAS, DIEGO (Secretary)
SANCHEZ SELLERO, CESAR ANDRES (Member)
Statistical Classification Techniques
Authorship
D.G.F.
Bachelor of Mathematics
Defense date
02.12.2026 17:00
Summary
This dissertation analyses the mathematical foundations of statistical classification in supervised learning, and aims to present an organic development of classification techniques, from classical models to modern algorithmic methods. After establishing empirical and Bayes risk minimization as the theoretical benchmark, the text examines classic parametric models such as Fisher's linear discriminant, LDA and QDA, discussing their derivation and underlying assumptions. Next, we define consistency, lay the foundations for the use of the regression function as a basis for classification and establish the necessary conditions for universal consistency of some non-parametric rules based on curve estimation: partition rules, kernel rules and the k-Nearest Neighbors algorithm, for which we also address asymptotic bounds for fixed k. In the last chapter, we detail the CART algorithm and analyse impurity functions, stopping criteria and pruning mechanisms for classification trees. We further expand the focus to Random Forests, where we discuss bootstrap and bagging techniques, and inter-tree correlation. Finally, we delve into neural network architectures and parameter optimization through backpropagation and stochastic gradient descent. As a comparative summary, each chapter includes an empirical analysis of the advantages and limitations of its respective models.
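The k-Nearest Neighbors rule the abstract mentions is simple enough to sketch directly (an illustrative one-dimensional toy, not code from the dissertation):

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (1-D Euclidean distance for simplicity)."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two well-separated classes on the real line.
train = [(0.1, "a"), (0.2, "a"), (0.3, "a"),
         (5.0, "b"), (5.2, "b"), (5.4, "b")]
pred_low = knn_predict(train, 0.0)
pred_high = knn_predict(train, 5.1)
```

The consistency theory referred to in the text concerns exactly this rule: how the error of the majority vote behaves as the sample size grows, with k fixed or growing slowly.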
Direction
GONZALEZ MANTEIGA, WENCESLAO (Tutorships)
Court
GARCIA RIO, EDUARDO (Chairman)
GARCIA LUCAS, DIEGO (Secretary)
SANCHEZ SELLERO, CESAR ANDRES (Member)
Lie algebras and root systems
Authorship
A.I.Q.
Bachelor of Mathematics
Defense date
02.12.2026 17:30
Summary
In this work, we are going to introduce Lie algebras. In particular, we will explore the classification of complex semisimple Lie algebras through Dynkin diagrams. First, the basics of Lie theory will be addressed, providing definitions, properties, and some examples. Then, in order to study complex semisimple Lie algebras, we will introduce abstract root systems, examining their basic properties after first discussing the structures from which they arise, namely Cartan subalgebras. Once we understand abstract root systems, we will finally study their relation to Cartan matrices and Dynkin diagrams, concluding with their classification.
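The bridge between root systems and diagrams is the Cartan matrix (standard definitions, not quoted from the thesis): given simple roots $\alpha_1,\dots,\alpha_\ell$,

```latex
a_{ij} \;=\; \frac{2\,(\alpha_i, \alpha_j)}{(\alpha_j, \alpha_j)},
\qquad
A_{2}\colon\;
\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix},
```

and distinct nodes $i \ne j$ of the Dynkin diagram are joined by $a_{ij}a_{ji}$ edges; for $A_2$ this gives two nodes joined by a single edge.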
Direction
DOMINGUEZ VAZQUEZ, MIGUEL (Tutorships)
Court
GARCIA RIO, EDUARDO (Chairman)
GARCIA LUCAS, DIEGO (Secretary)
SANCHEZ SELLERO, CESAR ANDRES (Member)
Rethinking the basic integration course
Authorship
M.L.R.
Bachelor of Mathematics
Defense date
02.13.2026 12:00
Summary
The theory of integration plays a central role in mathematical analysis, and yet the Riemann integral has important limitations when dealing with functions with many discontinuities or in more general convergence settings. In this Bachelor's thesis, the Newton integral in the version revised by Koliha, which we will refer to as the Newton Koliha integral, is studied as a didactic alternative for basic integration courses. First, the Newton Koliha integral is rigorously introduced, highlighting the role of continuous primitives and of conditions that hold "at practically every point". Then the fundamental properties of the integral are presented, and they are proved using both integration theories, Newton Koliha and Riemann, thereby making the differences between the two developments apparent. Subsequently, connections with the Lebesgue integral are established and several examples are analysed that illustrate the agreements and differences between the Riemann, Newton Koliha and Lebesgue frameworks. Finally, the integrability of continuous functions on compact intervals is proved by means of an updated version of Peano's existence theorem for ordinary differential equations, and extensions to infinite intervals and the geometric interpretation of the integral as area are discussed.
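The definition can be sketched as follows (a common formulation of Koliha's revision; the thesis's exact conventions may differ): a function $f\colon [a,b] \to \mathbb{R}$ is integrable if it has a continuous primitive up to a countable exceptional set,

```latex
\exists\, F \in C([a,b]) \;\text{such that}\; F'(x) = f(x)
\;\;\text{for all } x \in [a,b] \setminus C,\; C \text{ countable},
\qquad
\int_a^b f \;=\; F(b) - F(a).
```

The phrase "at practically every point" in the abstract refers to this countable exceptional set, in contrast with Lebesgue's "almost every point".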
Direction
LOPEZ POUSO, RODRIGO (Tutorships)
Court
LOPEZ POUSO, RODRIGO (Student’s tutor)
Mathematical Aspects of Economic Theories
Authorship
Á.M.P.
Bachelor of Mathematics
Defense date
02.12.2026 09:45
Summary
The theory of matrices with positive entries turns out to be highly useful for economic modeling. By representing the connections between productive sectors as a directed graph with an associated adjacency matrix, results such as the Perron-Frobenius Theorem and the Frobenius Normal Form are found to have an interesting economic interpretation. In this paper, we aim to rigorously present the theoretical and practical richness of modeling economies in this way. In this regard, we will first study the Leontief model in detail, and afterwards, the one introduced by Sraffa in Production of Commodities by Means of Commodities. By the end of this work, we will have understood the impact of the Perron-Frobenius Theorem on an issue seemingly far removed from Algebra, such as the development of capitalism, in light of Okishio's Theorem. We will also have successfully established the relationship between productivity and the dominant eigenvalue of a matrix.
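The dominant (Perron) eigenvalue the abstract refers to can be approximated by simple power iteration (an illustrative sketch, not code from the thesis; the toy two-sector matrix is made up):

```python
def power_iteration(A, iters=200):
    """Approximate the dominant eigenvalue of a matrix with positive
    entries by repeated multiplication and normalisation; by the
    Perron-Frobenius theorem this eigenvalue is real and positive."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)       # infinity norm as the scale
        v = [x / lam for x in w]
    return lam, v

# Toy input-output matrix of a two-sector economy.
A = [[0.5, 0.4],
     [0.3, 0.2]]
lam, v = power_iteration(A)
```

In the Leontief setting, a dominant eigenvalue below 1 (as here) corresponds to a productive economy, which is the productivity-eigenvalue relationship the abstract mentions.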
Direction
FERNANDEZ TOJO, FERNANDO ADRIAN (Tutorships)
Carcacia Campos, Isaac (Co-tutorships)
Court
VAZQUEZ CENDON, MARIA ELENA (Chairman)
LOPEZ SOMOZA, LUCIA (Secretary)
CASAS MENDEZ, BALBINA VIRGINIA (Member)
Some mathematical aspects in musical analysis
Authorship
I.N.C.
Bachelor of Mathematics
Defense date
02.13.2026 09:00
Summary
In this work, we examine how sound is generated from the vibration of a string, identifying the fundamental mode of vibration and its harmonics. These harmonics, in varying degrees of volume, define the timbre that characterizes an instrument. We will study these aspects through the one-dimensional wave equation and its resolution. Subsequently, we analyze how sounds combine to create musical scales. The construction of these scales is carried out by fifths, which is equivalent to performing an irrational rotation of the circle, causing the cycle not to close exactly. Following this, we use continued fractions to find the ideal number of notes in a scale in order to minimize the error. Finally, we introduce the necessary procedures for noise removal in a sound based on the Fourier Transform. With this tool, we decompose a complex signal (composed of many different sounds) into its constituent pure frequencies, enabling the elimination of unwanted ones.
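The scale construction can be made concrete: twelve fifths nearly close seven octaves because 7/12 is a continued-fraction convergent of log2(3/2), the size of a perfect fifth in octaves. A small sketch (illustrative code, not taken from the thesis):

```python
import math

def convergents(x, n):
    """First n continued-fraction convergents (p, q) of a real number x."""
    result = []
    p0, q0, p1, q1 = 1, 0, 0, 1   # seeds p_{-1}/q_{-1} and p_{-2}/q_{-2}
    for _ in range(n):
        a = math.floor(x)
        p0, q0, p1, q1 = a * p0 + p1, a * q0 + q1, p0, q0
        result.append((p0, q0))
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return result

# A perfect fifth spans log2(3/2) of an octave; 7/12 shows up as a convergent,
# which is why a 12-note scale approximates the cycle of fifths so well.
convs = convergents(math.log2(1.5), 6)
```

Each convergent q gives a candidate number of notes per octave that minimises the closure error of the cycle of fifths, which is exactly the optimisation the abstract describes.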
Direction
BUEDO FERNANDEZ, SEBASTIAN (Tutorships)
Rodríguez López, Rosana (Co-tutorships)
Court
VAZQUEZ CENDON, MARIA ELENA (Chairman)
LOPEZ SOMOZA, LUCIA (Secretary)
CASAS MENDEZ, BALBINA VIRGINIA (Member)
Introduction to Optimal Packing Problems
Authorship
J.M.O.C.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
02.13.2026 09:45
Summary
This work develops an introduction to the fundamental concepts, models, and algorithms of cutting and packing problems. To this end, a theoretical framework is initially established to unify the mathematical formulation of these problems. Following this foundation, three classic one-dimensional problems are analyzed in depth. First, the Knapsack Problem is studied, a value maximization problem for which two exact solution methods are presented, based on Dynamic Programming and Branch and Bound. Next, the Bin Packing Problem is addressed, which focuses on minimizing the use of resources. For this problem, the main heuristic algorithms are described, and an analysis of their performance is introduced using lower bounds and worst-case ratios. Thirdly, the Cutting Stock Problem is presented, a problem of great industrial relevance, whose solution is approached by obtaining the solution of the problem's linear relaxation through the column generation technique. Finally, to complete the study, a brief incursion into two-dimensional problems is made, illustrating how geometric complexity is often managed by reducing the problem to its already studied one-dimensional analogs.
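The Dynamic Programming approach to the Knapsack Problem mentioned above can be sketched in a few lines (a standard textbook implementation, not code from the thesis; the item data are illustrative):

```python
def knapsack(items, capacity):
    """0/1 knapsack by dynamic programming over capacities.
    items: list of (value, weight) pairs; returns the maximum total value."""
    best = [0] * (capacity + 1)
    for value, weight in items:
        # iterate capacities downwards so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight)
opt = knapsack(items, 50)
```

The table `best[c]` stores the optimal value for each capacity c, giving the O(n * capacity) pseudo-polynomial running time that makes the exact method practical for moderate capacities.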
Direction
GONZALEZ DIAZ, JULIO (Tutorships)
Court
VAZQUEZ CENDON, MARIA ELENA (Chairman)
LOPEZ SOMOZA, LUCIA (Secretary)
CASAS MENDEZ, BALBINA VIRGINIA (Member)
Exploring random walks in ontology alignment with MILA
Authorship
J.M.O.C.
Double Bachelor's Degree in Informatics Engineering and Mathematics
Defense date
02.20.2026 09:15
Summary
Ontology alignment identifies semantic correspondences between concepts in heterogeneous ontologies, a task that is essential for interoperability in the Semantic Web. Current approaches based on language models achieve high accuracy, but querying the model is costly for ambiguous cases that cannot be resolved through semantic similarity. This work proposes complementing semantic information with structural information using random walks. A graph is constructed that unifies the source and target ontologies, connecting them through previously identified high-confidence correspondences. The hypothesis is that two concepts from different ontologies are likely equivalent if a random walker starting from one frequently reaches the other, indicating proximity within the graph structure. Two algorithms (Biased Random Walks and Random Walks with Restart) were implemented, integrated into the MILA system, and evaluated on five tasks from OAEI 2024: three from the Bio-ML track, one from the biodiversity domain, and one from the anatomy domain. The results show that structural information adds value selectively: it improves the candidate ranking for certain ontology pairs where semantic similarity is weakly discriminative, but it can introduce noise when semantic signals are already strong. Advisor mode (re-ranking) is more robust than predictor mode (autonomous decision-making), and Biased Random Walks outperform Random Walks with Restart. We conclude that random walks are a complement -not a substitute- for semantic techniques, and that their benefit depends on the structural characteristics of the ontology pair.
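The Random Walks with Restart idea described in the summary can be sketched as follows; the power-iteration solver, the tiny example graph, and the restart probability are illustrative assumptions, not MILA's actual implementation.

```python
import numpy as np

def random_walk_with_restart(adj, start, restart_prob=0.15, tol=1e-10, max_iter=1000):
    """Stationary visit probabilities of a walker that, at each step,
    restarts at `start` with probability `restart_prob` and otherwise
    moves to a uniformly chosen neighbour."""
    n = adj.shape[0]
    deg = adj.sum(axis=0)
    # Column-stochastic transition matrix: column j spreads mass over j's neighbours.
    P = adj / np.where(deg == 0, 1, deg)
    e = np.zeros(n)
    e[start] = 1.0
    p = e.copy()
    for _ in range(max_iter):
        p_next = (1 - restart_prob) * P @ p + restart_prob * e
        if np.abs(p_next - p).max() < tol:
            break
        p = p_next
    return p
```

On a unified graph, concepts that score each other highly from their respective start nodes would be candidate correspondences; nodes closer to the start in the graph structure receive larger visit probabilities.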
Direction
TABOADA IGLESIAS, MARÍA JESÚS (Tutorships)
Court
Barro Ameneiro, Senén (Chairman)
GARCIA FERNANDEZ, JULIAN (Secretary)
LADRA GONZALEZ, MANUEL EULOGIO (Member)
Study of incompressible fluids in laminar flow. Applications to simple cases.
Authorship
M.P.V.
Bachelor of Mathematics
Defense date
02.13.2026 10:30
Summary
This project studies incompressible Newtonian fluids in steady laminar flow, combining analytical solutions and numerical simulations with COMSOL Multiphysics. Three classical cases are considered: flow between two stationary parallel plates (Poiseuille flow), flow between two parallel plates induced by the movement of the upper plate (Couette flow), and flow inside a circular pipe (Hagen-Poiseuille flow). For each case, theoretical expressions for the velocity profiles and flow rates are obtained and compared with numerical results. The comparison shows excellent agreement, confirming the accuracy of the finite element method for basic fluid mechanics problems. The influence of mesh refinement and of the choice of boundary conditions on simulation accuracy is also discussed, and it is confirmed that comparisons with theoretical solutions are meaningful only in fully developed flow regions (far from the inlet). Finally, a more complex case is analyzed: flow in curved pipes, where secondary flows such as Dean vortices appear, which cannot be described analytically. This highlights the usefulness of numerical tools for studying complex geometries and the benefits of combining analytical and computational approaches in fluid mechanics.
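For the Hagen-Poiseuille case, the analytical expressions mentioned above have a simple closed form. The sketch below (with arbitrary illustrative parameter values, not those of the thesis) shows the classical parabolic profile u(r) = Δp(R² - r²)/(4μL) and flow rate Q = πΔpR⁴/(8μL).

```python
import numpy as np

def velocity_profile(r, R, dp, mu, L):
    """Axial velocity u(r) = dp/(4 mu L) * (R^2 - r^2): parabolic,
    maximal on the pipe axis, zero at the wall r = R."""
    return dp / (4.0 * mu * L) * (R**2 - r**2)

def flow_rate(R, dp, mu, L):
    """Volumetric flow rate Q = pi * dp * R^4 / (8 mu L),
    obtained by integrating u(r) * 2 pi r over the cross-section."""
    return np.pi * dp * R**4 / (8.0 * mu * L)
```

Integrating the profile numerically over the cross-section reproduces the flow-rate formula, which is the kind of consistency check the thesis performs against COMSOL results.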
Direction
GOMEZ PEDREIRA, MARIA DOLORES (Tutorships)
Court
VAZQUEZ CENDON, MARIA ELENA (Chairman)
LOPEZ SOMOZA, LUCIA (Secretary)
CASAS MENDEZ, BALBINA VIRGINIA (Member)
Introduction to domain decomposition methods
Authorship
L.R.L.
Bachelor of Mathematics
Defense date
02.13.2026 11:15
Summary
In this paper, the theoretical ideas of domain decomposition and some of its applications will be presented, with the aim of showing their importance in both theoretical and practical contexts. The basic principles of these methods will be explained and their operation illustrated through representative examples: the one-dimensional non-overlapping case, the multi-dimensional non-overlapping case, the one-dimensional overlapping case, the two-dimensional overlapping case, and the two-dimensional overlapping case under realistic boundary conditions. These examples have been chosen to facilitate an understanding of the principles of domain decomposition and to highlight their usefulness in solving problems of applied interest. Furthermore, the MATLAB codes associated with each of the examples discussed will be included, allowing the results obtained to be reproduced and facilitating a practical understanding of the algorithms studied.
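As an illustration of the one-dimensional overlapping case, the alternating Schwarz method for -u'' = f on (0,1) with homogeneous Dirichlet conditions can be sketched in a few lines. This is a hypothetical Python analogue of the thesis's MATLAB codes, with illustrative grid size, overlap width, and iteration count.

```python
import numpy as np

def schwarz_1d(f=1.0, N=100, n_overlap=20, n_iter=50):
    """Alternating Schwarz for -u'' = f on (0,1), u(0) = u(1) = 0,
    with a finite-difference grid and two overlapping subdomains."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    mid = N // 2
    j1, j2 = mid - n_overlap // 2, mid + n_overlap // 2  # overlap [x_j1, x_j2]
    u = np.zeros(N + 1)

    def solve_sub(lo, hi, left_bc, right_bc):
        # Solve -u'' = f on interior nodes lo+1 .. hi-1 with Dirichlet data.
        m = hi - lo - 1
        A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = np.full(m, f)
        b[0] += left_bc / h**2
        b[-1] += right_bc / h**2
        return np.linalg.solve(A, b)

    for _ in range(n_iter):
        u[1:j2] = solve_sub(0, j2, 0.0, u[j2])      # subdomain (0, x_j2)
        u[j1 + 1:N] = solve_sub(j1, N, u[j1], 0.0)  # subdomain (x_j1, 1)
    return x, u
```

For f = 1 the exact solution is u(x) = x(1-x)/2; the iterates converge to it geometrically, with a rate that improves as the overlap widens.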
Direction
ALVAREZ DIOS, JOSE ANTONIO (Tutorships)
Court
VAZQUEZ CENDON, MARIA ELENA (Chairman)
LOPEZ SOMOZA, LUCIA (Secretary)
CASAS MENDEZ, BALBINA VIRGINIA (Member)
Geometric Aspects of the Theory of Relativity
Authorship
M.S.A.
Bachelor of Mathematics
Defense date
02.12.2026 18:00
Summary
The birth of special relativity marked a break with deeply established classical concepts such as absolute space and absolute time. The aim of this work is to delve into this theory while preparing the mathematical language needed to later introduce general relativity. To this end, the first chapter introduces the context prior to the birth of the theory and develops the ideas needed to understand why it appears and what its consequences are, ending with an analysis of relativistic effects such as time dilation and Lorentz contraction. Subsequently, we carry out a thorough analysis of the mathematical language used in both the special and the general theory. We study tensors, semi-Riemannian geometry and, in particular, Lorentzian geometry. Finally, the last chapter seeks to unify the previous two: we use the mathematical language established in chapter two to formalize results studied in the first chapter, adding some new ones.
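The relativistic effects mentioned above reduce to simple formulas in the Lorentz factor γ = 1/√(1 - v²/c²). The following sketch uses units with c = 1 and illustrative speeds; it is a numerical aside, not part of the geometric treatment in the thesis.

```python
import math

def lorentz_factor(v, c=1.0):
    """gamma = 1 / sqrt(1 - v^2 / c^2); diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def time_dilation(proper_time, v, c=1.0):
    """Coordinate time elapsed for a clock moving at speed v:
    dt = gamma * dtau (moving clocks run slow)."""
    return lorentz_factor(v, c) * proper_time

def length_contraction(proper_length, v, c=1.0):
    """Length measured in a frame where the object moves at speed v:
    L = L0 / gamma (moving rods are shortened)."""
    return proper_length / lorentz_factor(v, c)
```

For example, at v = 0.8c the Lorentz factor is 5/3, so a proper time of 3 units dilates to 5 and a proper length of 5 units contracts to 3.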
Direction
DOMINGUEZ VAZQUEZ, MIGUEL (Tutorships)
Court
GARCIA RIO, EDUARDO (Chairman)
GARCIA LUCAS, DIEGO (Secretary)
SANCHEZ SELLERO, CESAR ANDRES (Member)
Dahlquist's theorem
Authorship
M.C.S.M.
Bachelor of Mathematics
Defense date
02.12.2026 11:00
Summary
Dahlquist's theorem is a fundamental result in the theory of linear multistep methods (LMM) for the numerical solution of ordinary differential equations. It states that a linear multistep method is stable if and only if it satisfies the so-called root condition, formulated in terms of the characteristic polynomial associated with the method. In the course Numerical Methods in Optimization and Differential Equations this result is stated, but only the necessity of the condition is proved. The main objective of this Bachelor's Thesis is to present a proof of the sufficiency of the root condition for the stability of linear multistep methods. To this end, an approach is adopted that reformulates a multistep method as a single-step method with a matrix coefficient, defined in a higher-dimensional space. This reduces the study of stability to analyzing the behavior of the powers of certain square matrices and characterizing those whose powers remain bounded. The work also includes a review of the concepts of consistency, stability, and convergence within the framework of linear multistep methods, as well as the application of Dahlquist's theorem to specific multistep methods, in order to illustrate the usefulness of the theoretical results developed.
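The root condition can be checked numerically: every root of the first characteristic polynomial ρ must lie in the closed unit disc, and roots on the unit circle must be simple. The function below is an illustrative sketch with a hypothetical tolerance, not a substitute for the symbolic analysis carried out in the thesis.

```python
import numpy as np

def satisfies_root_condition(rho_coeffs, tol=1e-6):
    """Check Dahlquist's root condition for the characteristic polynomial
    rho(z) of a linear multistep method. `rho_coeffs` lists coefficients
    in decreasing powers of z. Stability requires |z| <= 1 for every root,
    with roots of modulus one being simple."""
    roots = np.roots(rho_coeffs)
    for i, z in enumerate(roots):
        if abs(z) > 1 + tol:
            return False                       # root outside the unit disc
        if abs(abs(z) - 1) <= tol:
            # A root on the unit circle must not be (numerically) repeated.
            others = np.delete(roots, i)
            if np.any(np.abs(others - z) <= tol):
                return False
    return True
```

For instance, the two-step Adams-Bashforth method has ρ(z) = z² - z, whose roots 0 and 1 satisfy the condition, whereas ρ(z) = z² - 2z + 1 = (z - 1)² has a double root on the unit circle and fails.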
Direction
MUÑOZ SOLA, RAFAEL (Tutorships)
Court
MUÑOZ SOLA, RAFAEL (Student’s tutor)
Epidemic models using ordinary differential equations
Authorship
D.V.M.
Bachelor of Mathematics
Defense date
02.13.2026 14:30
Summary
The objective of this work is the mathematical study of the SIR model in the field of epidemiology. This model is based on a system of differential equations used to study how an epidemic evolves over time and to understand the spread dynamics within a population. The document begins with a historical overview to contextualize the importance of studying the model. Subsequently, different variants are analyzed, paying special attention to the SI and SIS models. Next, the SIR model and its mathematical properties are studied in detail, analyzing the behavior of the curves for susceptible, infected, and recovered individuals. This is key to determining, for example, whether a disease will die out on its own or grow until it reaches an infection peak. To conclude the work with a practical validation, we will examine the case of the 1978 flu outbreak and a novel approach called Digital Twins.
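The SIR dynamics described above can be sketched with a standard Runge-Kutta integration. The parameter values in the test below are illustrative (loosely inspired by boarding-school influenza estimates), not the fitted values analysed in the thesis.

```python
import numpy as np

def simulate_sir(beta, gamma, s0, i0, r0=0.0, t_max=100.0, dt=0.01):
    """Integrate the SIR system
        S' = -beta * S * I,  I' = beta * S * I - gamma * I,  R' = gamma * I
    with a classical fourth-order Runge-Kutta scheme.
    Returns the trajectory as an array of (S, I, R) rows."""
    def rhs(y):
        s, i, r = y
        return np.array([-beta * s * i, beta * s * i - gamma * i, gamma * i])

    steps = int(t_max / dt)
    y = np.array([s0, i0, r0], dtype=float)
    traj = [y.copy()]
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(y.copy())
    return np.array(traj)
```

Since S' + I' + R' = 0, the total population is conserved along the trajectory; with a basic reproduction number above one, the infected curve rises to a peak and then dies out, the qualitative behavior the summary refers to.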
Direction
Nieto Roig, Juan José (Tutorships)
Court
Nieto Roig, Juan José (Student’s tutor)