# Applied Math Seminar

### Previous Lectures

In our research we consider a linear second-order differential equation with delay and impulses. We construct Green's functions for the two-point boundary value problem. Using these Green's functions, we find necessary and sufficient conditions for the positivity of the Green's functions of this impulsive equation, coupled with two-point boundary conditions, in the form of theorems about differential inequalities.

We propose a numerical algorithm for calculating the matrix exponential which is stable for every matrix and any required number of significant digits. The algorithm is based on the Lanczos method of eigenvalue calculation. A theoretical analysis and a proof of stability of the algorithm are given.

Joint work with Shlomo Yanetz and Gregory Agranovich (Ariel)
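The abstract does not spell out the algorithm, but the core idea behind Lanczos-based matrix exponentials can be sketched generically: project the matrix onto a small Krylov subspace and exponentiate the resulting tridiagonal matrix. The following Python sketch (our illustration under that assumption, not the authors' stability-hardened algorithm; the function name and parameters are ours) approximates exp(A)·v for a symmetric matrix A:

```python
import numpy as np

def lanczos_expm_action(A, v, m=20):
    """Approximate exp(A) @ v for a symmetric matrix A using an m-step
    Lanczos (Krylov) projection: build an orthonormal basis Q of the
    Krylov subspace, exponentiate the small tridiagonal projection T,
    and map the result back. Generic sketch, not the talk's algorithm."""
    n = len(v)
    m = min(m, n)
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    Q[:, 0] = v / np.linalg.norm(v)
    q_prev, b_prev = np.zeros(n), 0.0
    for j in range(m):
        w = A @ Q[:, j] - b_prev * q_prev
        alpha[j] = Q[:, j] @ w
        w = w - alpha[j] * Q[:, j]
        b = np.linalg.norm(w)
        if j + 1 < m:
            if b < 1e-12:          # invariant subspace found: stop early
                m = j + 1
                break
            beta[j] = b
            Q[:, j + 1] = w / b
            q_prev, b_prev = Q[:, j], b
    # Small tridiagonal projection T = Q^T A Q
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1)
         + np.diag(beta[:m - 1], -1))
    evals, evecs = np.linalg.eigh(T)   # T is small and symmetric
    expT = evecs @ np.diag(np.exp(evals)) @ evecs.T
    return np.linalg.norm(v) * (Q[:, :m] @ expT[:, 0])
```

With m equal to the matrix dimension the Krylov subspace is the full space and the result is exact up to round-off; the point of such methods is that a small m often suffices.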

We consider the motion of inertial particles in a random (turbulent) flow. The inertia of the particles causes a delay: the particle's velocity is not the local flow velocity but the flow velocity at the trajectory's position some time earlier. We demonstrate that this causes the particles to cluster on a fractal set. This has applications in the rain-prediction problem and in industry.

Our body is colonized by trillions of microbes, known as the human microbiome, living with us in a complex ecological system. Those micro-organisms play a crucial role in determining our health and well-being, and there are ongoing efforts to develop tools and strategies to control these ecosystems. In this talk I address a simple but fundamental question: are the microbial ecosystems in different people governed by the same host-independent ecological principles, represented by a characteristic (i.e. universal) mathematical model? Answering this question determines the feasibility of general therapies and control strategies for the human microbiome. I will introduce our novel methodology that distinguishes between two scenarios: host-independent and host-specific underlying dynamics. This methodology has been applied to study different body sites across healthy subjects. We also analyzed the gut microbial dynamics of subjects with recurrent Clostridium difficile infection (rCDI) and the same set of subjects after fecal microbiota transplantation (FMT). The results can fundamentally improve our understanding of forces and processes shaping human microbial ecosystems, paving the way to design general microbiome-based therapies.

The most precise determination of the proton charge radius, from a novel muonic hydrogen spectroscopy experiment, disagrees with previous spectroscopy and scattering experiments done with electrons. This 7-sigma discrepancy is known as the "proton radius puzzle", and may be the result of hitherto unknown physics. In order to investigate it, experiments with other muonic atoms have been conducted. These experiments rely on accurate theoretical predictions; in particular, their precision is limited by the nuclear corrections. We have calculated these corrections for muonic atoms with A = 3, 4 nucleons, for the first time using ab-initio methods and state-of-the-art nuclear potentials, significantly improving previous estimates, and have also contributed to the A = 2 case. This was achieved using a newly developed method, based on the Lanczos algorithm, for the calculation of energy-dependent sum rules. Our new method and our results will be presented and discussed.

In this lecture we present an improvement to the running time of the approximation algorithm of Philip Klein and R. Ravi. The Klein-Ravi algorithm produces a Steiner tree that is close to the minimal one by performing iterations repeatedly. Each iteration's implementation is based on the distances between the nodes of the graph. The main hypothesis that led to our improvement is that there is no need to find all the distances in the graph, but only a part of them. In addition, an example of the algorithm's implementation will be shown.

Joint work with Eli Packer, IBM, and Shlomo Yanetz
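To illustrate the "only part of the distances" idea in a hedged way: the classical shortest-path-based Steiner approximation only ever needs distances from the terminal nodes, never all-pairs distances. The following minimal Python sketch (ours, not the talk's improved algorithm; names are our own) computes the metric-closure MST over the terminals with one Dijkstra run per terminal:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src in a weighted graph given as
    {node: [(neighbor, weight), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue               # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def terminal_mst_weight(adj, terminals):
    """Weight of a minimum spanning tree over the metric closure of the
    terminals: only len(terminals) Dijkstra runs are needed, so distances
    between pairs of non-terminal nodes are never computed."""
    dist = {t: dijkstra(adj, t) for t in terminals}
    in_tree = {terminals[0]}       # Prim's algorithm over the terminals
    total = 0.0
    while len(in_tree) < len(terminals):
        w, v = min((dist[u][x], x) for u in in_tree
                   for x in terminals if x not in in_tree)
        in_tree.add(v)
        total += w
    return total
```

On a star graph with three terminal leaves at distance 1 from the center, the terminal MST has weight 4 while the optimal Steiner tree has weight 3, within the classical factor-2 guarantee.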

The brain contains billions of neurons, each connected to several thousand other neurons. The voltage recorded over the scalp/skull is generated by the activity of large populations of neurons. Different recorded amplitudes at different states of vigilance are attributed to differences in the level of synchrony among neurons and to different statistical structures of the population; however, the relation between the signals and the statistical characteristics of the underlying neural activity is still an open question. We developed a model based on multidimensional stationary stochastic processes to resolve the statistical organization properties of neural assemblies. We showed that, despite the many possible statistical organizations, only very few are mathematically plausible.

Transmission rates in broadband optical waveguide systems are enhanced by launching many pulse sequences through the same waveguide. Since pulses from different sequences propagate with different group velocities, intersequence pulse collisions are frequent, and can lead to severe transmission degradation. On the other hand, the energy exchange in pulse collisions can be beneficially used for controlling the transmission.

In this work we show that collision-induced amplitude dynamics of soliton sequences of N perturbed coupled nonlinear Schrödinger (NLS) equations can be described by N-dimensional Lotka-Volterra (LV) models, where the model's form depends on the perturbation. To derive the LV models, we first carry out single-collision analysis, which is based on the method of eigenmode expansion with the eigenmodes of the linear operator describing small perturbations about the fundamental NLS soliton. We use stability and bifurcation analysis for the equilibrium points of the LV models to develop methods for achieving robust transmission stabilization and switching that work well for a variety of waveguides. Further enhancement of transmission stability is obtained in waveguides with a narrowband Ginzburg-Landau gain-loss profile. We also discuss the possibility of using the relation between the NLS and LV models to realize a transition to spatio-temporal chaos with NLS solitons.
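The specific LV models derived from the perturbed NLS equations are not given in the abstract; as a generic illustration, an N-dimensional Lotka-Volterra system with a stable equilibrium can be integrated in a few lines of Python (the rates `r` and interaction matrix `A` below are hypothetical, chosen only so that (1, 1) is a stable equilibrium):

```python
def lv_step(x, r, A, dt):
    """One RK4 step of the Lotka-Volterra system
    dx_i/dt = x_i * (r_i + sum_j A[i][j] * x_j)."""
    n = len(x)
    def f(y):
        return [y[i] * (r[i] + sum(A[i][j] * y[j] for j in range(n)))
                for i in range(n)]
    k1 = f(x)
    k2 = f([x[i] + 0.5 * dt * k1[i] for i in range(n)])
    k3 = f([x[i] + 0.5 * dt * k2[i] for i in range(n)])
    k4 = f([x[i] + dt * k3[i] for i in range(n)])
    return [x[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(n)]

# Hypothetical 2-sequence example: parameters chosen so that (1, 1)
# is a stable equilibrium (r_i + sum_j A[i][j] = 0, A negative definite).
r = [1.0, 1.0]
A = [[-0.8, -0.2], [-0.2, -0.8]]
x = [0.5, 1.5]
for _ in range(2000):              # integrate to t = 20
    x = lv_step(x, r, A, 0.01)
```

Here the Jacobian at (1, 1) equals A, whose eigenvalues are negative, so trajectories relax to the equilibrium, mirroring the "transmission stabilization" role equilibria play in the talk.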

Delays arising in nonoscillatory and stable ordinary differential equations can induce oscillation and instability of their solutions. That is why the traditional direction in the study of nonoscillation and stability of delay equations is to establish a smallness of the delay, allowing delay differential equations to preserve these convenient properties of ordinary differential equations with the same coefficients. In this talk, we find cases in which delays arising in oscillatory and asymptotically unstable ordinary differential equations induce nonoscillation and stability of the delay equations. We demonstrate that, although the ordinary differential equation x''(t) + c(t)x(t) = 0 can be oscillatory and asymptotically unstable, the delay equation x''(t) + a(t)x(t-h(t)) - b(t)x(t-g(t)) = 0, where c(t) = a(t) - b(t), can be nonoscillatory and exponentially stable. Results on nonoscillation and exponential stability of delay differential equations are obtained. On the basis of these results, new possibilities of non-invasive (non-evasive) control, which allow us to stabilize the motion of a single mass point, are proposed. Stabilization of this sort, according to common belief, requires a damping term in the second-order differential equation. The results obtained here refute this belief.
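To experiment numerically with equations of this type, a fixed-step scheme with a history buffer is enough for rough exploration. The sketch below is our toy integrator with constant coefficients and constant pre-history, not the analytical machinery of the talk:

```python
def integrate_dde(a, b, h, g, x0, v0, t_end, dt=0.001):
    """Semi-implicit Euler integration of x''(t) + a*x(t-h) - b*x(t-g) = 0
    with constant coefficients and constant pre-history x(t) = x0 for
    t <= 0. A toy sketch for exploration only; the talk treats
    time-dependent coefficients analytically."""
    n = int(t_end / dt)
    xs = [x0]
    v = v0
    for k in range(n):
        t = k * dt
        def lag(tau):
            # value of x at time t - tau; constant history before t = 0
            i = int(round((t - tau) / dt))
            return xs[i] if i >= 0 else x0
        acc = -a * lag(h) + b * lag(g)
        v = v + dt * acc
        xs.append(xs[-1] + dt * v)
    return xs
```

With h = g = 0 the scheme reduces to the ordinary equation x'' + (a - b)x = 0, which provides a convenient sanity check against cos(t) when a - b = 1.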


Supervised learning is based predominantly on labeled examples, which are often expensive and scarce. An alternative form of supervision is equivalence constraints (ECs): pairs of examples that are known to be from the same/different classes, even though their class labels are unknown. Equivalence constraints are often easier and cheaper to obtain, but the theoretical underpinnings of their learning utility relative to labels are still lacking. In this work we develop a novel framework for analyzing the learning utility of equivalence constraints. Specifically, we extend the statistical-mechanics Perceptron capacity calculations, used thus far only for labeled data, to supervised learning from equivalence constraints. We then derive generalization bounds for training with equivalence constraints, using a link between Perceptron capacity and Rademacher complexity. We prove that for large sample sizes, a sample with EC supervision becomes as powerful as a fully labeled sample of the same size. We also prove that this result holds even when the examples in the constraints are highly correlated.

RNA is a two-level language. The first level is the language of the RNA sequence. The second level is that of RNA structures and their biological roles. The seminar will discuss new computational approaches to studying the roles RNA structures may play in controlling gene expression levels, temperature adaptation, and bacterial and viral evolution.

A “regenerator” is a special-purpose counter-flow heat exchanger used to recover waste heat from exhaust gases. In such heat exchangers the energy storage medium is alternately heated by hot combustion products and cooled by the air supplied to the combustion chamber. This type of heat exchanger can have a thermal efficiency of over 90%, transferring almost all the recoverable heat energy from one flow direction to the other. The study is aimed at the development of an efficient regenerative system for gas turbine engines. The proposed design is based on a static-chamber regenerator with porous ceramic foam as the heat transfer/storage medium. A numerical model was developed for theoretical analysis and identification of the parameters controlling the performance of a regenerator. The pressure drop and the heat transfer efficiency were calculated and compared for two porous media types: foam and square honeycomb.

Many problems in science and engineering involve, as part of their solution process, the consideration of a minimization of a composite function F=f+g where f is smooth, g possibly not, and both are convex. The talk will discuss, in generalized settings, two proximal forward-backward algorithms aiming at solving this task. The first is FISTA, a popular accelerated method suggested by Beck-Teboulle. We consider it in Hilbert spaces and allow error terms which satisfy a certain decay rate. The notion of inexactness we discuss seems to be simpler than the ones discussed in related works but, interestingly, very similar decay rates of the error terms yield very similar non-asymptotic convergence rates (in the function values). Our derivation also sheds some light on the somewhat mysterious origin of some relevant parameters. In the second method, which is non-accelerated, the setting is closed and convex subsets of reflexive Banach spaces where the proximal operation is based on a (strongly convex) Bregman divergence. Now, in contrast to previous works, the gradient of f may not be globally Lipschitz continuous. Under certain assumptions a non-asymptotic rate of convergence is established, as well as weak convergence of the whole sequence.

This is a joint work with Alvaro De Pierro

In this talk I will present a lower bound for the ADM mass given in terms of the angular momenta and charges of black holes present in axisymmetric initial data sets for the Einstein-Maxwell equations. This generalizes the mass-angular momentum-charge inequality obtained by Chrusciel and Costa to the case of multiple black holes. We also weaken the hypotheses used in the proof of this result for single black holes, and establish the associated rigidity statement. The proof uses an existence result for harmonic maps with prescribed singularities.

This is joint work with Marcus Khuri

http://arxiv.org/abs/1502.06290

We consider the problem of efficiently covering a domain by unit discs. This problem has applications in optimal cellular antenna placement, facility location, and many other similar problems. Our main interest is a result by W. Blaschke, which gives an upper bound on the number of unit discs needed to cover a given convex domain. Blaschke showed that such a domain can be covered with

2A/(3√3) + 2L/(π√3) + 1    (1)

unit discs, where *A* is the area of the given domain and *L* its perimeter. This result relies on the properties of the hexagonal lattice. This talk will be composed of three main results. First, we will show that in special cases Blaschke's result can be improved, and show how to locate the hexagonal lattice in these cases. Second, we will give a sufficient condition under which (1) can be improved. Third, we will give an algorithmic approach that determines the exact position of the hexagonal lattice such that the number of unit hexagons (in the hexagonal lattice) which hit the domain is minimized.
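Reading formula (1) as 2A/(3√3) + 2L/(π√3) + 1, the bound is immediate to evaluate; for example, for a unit square (A = 1, L = 4) it comes out just under 3:

```python
import math

def blaschke_disc_bound(area, perimeter):
    """Blaschke's upper bound (1) on the number of unit discs
    needed to cover a convex domain of area A and perimeter L."""
    return (2 * area / (3 * math.sqrt(3))
            + 2 * perimeter / (math.pi * math.sqrt(3)) + 1)

# Unit square: A = 1, L = 4; the bound evaluates to about 2.855,
# so at most 3 unit discs by (1).
bound = blaschke_disc_bound(1.0, 4.0)
```

The bound is not tight for small domains (a single unit disc already covers a unit square, whose circumradius is √2/2), which is consistent with the talk's claim that (1) can be improved in special cases.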

Origami is the traditional Japanese art of paper folding. In the past 30 years investigations into folding properties have not only resulted in many stunning models, but also a surprising number of applications. In this talk we will provide an introduction to some of the mathematics of folding, including various theoretical notions of what sorts of folds are possible.

(OFDM = Orthogonal frequency-division multiplexing)

We propose a novel type of CO-OFDM based on the recently developed dual-tree complex wavelet packet transform (DT-CWPT). In particular, polarization mode dispersion (PMD) can be compensated by digital signal processing using a DT-CWPT, which is characterized by a single sideband. Numerical simulations show that 1 Tb/s single-channel CO-OFDM transmission over a distance of 1800 km with a spectral efficiency (SE) of 7.88 bit/s/Hz can be realized.

Joint work with D. Brodeski, B.I. Lembrikov

This talk will present a new framework for quantifying the coupling and interdependences between different financial markets. Employing ideas and techniques from complexity science and from the theory of coupled and interdependent networks to understand and quantify the role of connections and dependencies within a system, and between different systems, opens the possibility of managing the complexity, optimizing the systems, and reducing their vulnerability to failures. More specifically, we investigate the stock-stock correlations in individual markets as local market dynamics, and the correlation of correlations, or meta-correlations, which represents global market dynamics. Furthermore, we make use of the recently introduced dependency network methodology, which enables a quantification of the influence relationships between the different markets. The methodologies presented provide the means to track the flow of information between different markets, and can be used to identify changes in correlations in strongly coupled markets. Finally, we will discuss different applications of network science in finance and economics, which demonstrate how one can use empirical financial data to construct a network that represents the financial system, and then use it to study different aspects such as structure, dynamics and stability.

The world has become a global village, and this village is becoming smaller and smaller with the continuous introduction of new ways to interact and connect with other people. Thus, the methodology outlined in this talk will provide new tools and means to quantify, characterize and manage the complexity of the world's economy. The methodologies presented here can be used as the basis for a quantitative early-warning tool, a “financial seismograph”, which will provide policy makers with the necessary precursors of significant local and global economic events.

Commercial GNSS devices tend to perform poorly in urban canyon environments: dense, tall buildings block the signals from many of the satellites. In this talk, we present a particle filter algorithm for the Shadow Matching framework that addresses this problem. Given a 3D city map and the satellites' signal properties, the algorithm calculates, in real time, invalid regions inside the Region Of Interest (ROI). This approach reduces the ROI to a fraction of its original size. We present a general framework for a Shadow Matching positioning algorithm based on a modified particle filter. Using simulation experiments, we have shown that the suggested method can improve the accuracy of existing GNSS devices in urban regions. Moreover, the proposed algorithm can be efficiently extended to 3D positioning at a high sampling rate, which is inherently applicable to UAVs and drones.
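The abstract does not detail the modified particle filter, but the bootstrap predict/reweight/resample cycle it builds on can be sketched generically in Python (the function names and the systematic-resampling choice are ours, not the talk's):

```python
import random

def resample(particles, weights):
    """Systematic resampling: draw len(particles) survivors with
    probability proportional to weight."""
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = random.uniform(0.0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:             # advance to the particle whose
            i += 1                 # cumulative weight covers u
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

def filter_step(particles, weights, likelihood, motion):
    """One generic predict/update/resample cycle of a bootstrap
    particle filter: propagate each particle with `motion`, reweight
    by the `likelihood` of the observation, then resample."""
    particles = [motion(p) for p in particles]
    weights = [w * likelihood(p) for w, p in zip(weights, particles)]
    particles = resample(particles, weights)
    n = len(particles)
    return particles, [1.0 / n] * n
```

In a Shadow Matching setting, the likelihood of a candidate position would reflect how well the predicted satellite visibility at that position matches the measured signals; particles inside "invalid regions" would simply receive near-zero weight.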

The recent years have seen spectacular advances in our understanding of the structure of complex networks, providing detailed maps of social and technological systems, cellular networks and food webs. The ultimate goal of these efforts is to be able to translate these topological findings into dynamical predictions on the system's observable behavior. However, our progress in this direction is hindered by a crucial lacuna: *the absence of microscopic models that describe the dynamics of many of the relevant complex systems*. The challenge is that these systems are, in effect, a black box. We can observe their macroscopic behavior, e.g., track the spread of an epidemic, but we have no direct access to the microscopic exchanges taking place between the nodes, i.e., the dynamical model that most accurately describes the processes of infection and recovery. Metaphorically, the task of unveiling these microscopic dynamics is equivalent to attempting to recover the structure of a car's engine directly from observations of its macro-scale behavior, with no direct access to what is under the hood. Hence, we developed a reverse engineering method to infer the microscopic dynamics of a complex system directly from observations of its response to external perturbations. The formalism allows us to construct the most general class of continuum models that are consistent with the observed behavior.

We consider a second-order delay differential equation with impulses. In this paper we find necessary and sufficient conditions for the positivity of Green's functions for this impulsive equation, coupled with one- or two-point boundary conditions, in the form of theorems about differential inequalities. By choosing the test function in these theorems, we obtain simple sufficient conditions.

The main conclusion of more than 50 years of evolution of the theory of social choice (Mueller, 2000) is as follows: in the world constructed according to Arrow’s model of social choice, only different forms of collective oppression can exist. Weale (Theory of Choice, 1992) gives the following vision of an alternative model of social choice: "An alternative model of collective choice would be most likely to present it not as a process of preference aggregation, in which there is a mapping from a set of individual orderings to a social ordering, but as a process of dialog in which reasons are exchanged between participants in a process that is perceived to be a joint search for a consensus".

In this report I aim to construct this alternative model of social choice, based on the value-powered exchange of economic or symbolic goods. I demonstrate that, under some natural hypotheses about individual demand and supply functions of goods, social consensus is possible, i.e., there exist stable stationary points in multivariate systems of social exchange of economic or symbolic goods. These stable stationary points are interpreted as the social consensus points in dialogic (or poly-logic) processes of social choice.

Real world networks are characterized by common features, including among others a scale free degree distribution, a high clustering coefficient and a short typical distance between nodes. These properties are usually explained by the dynamics of edge and node addition and deletion.

We here propose to combine the dynamics of the nodes content and of the edges addition and deletion, using a threshold automata framework. Within this framework, we show that the typical properties of real world networks can be reproduced with a Hebbian approach, in which nodes with similar internal dynamics have a high probability of being connected. The proper network properties emerge only if an imbalance exists between excitatory and inhibitory connections, as is indeed observed in real networks.

We further check the plausibility of the suggested mechanism by observing an evolving social network and measuring the probability of edge addition as a function of similarity between contents of the corresponding nodes. We indeed find that similarity between nodes increases the emergence probability of a new link between them.
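A toy version of the similarity-driven edge addition (our illustration only, not the paper's threshold-automata model) can be written in a few lines: candidate pairs are accepted with probability equal to the similarity of their state vectors, so nodes with similar internal dynamics are more likely to become connected:

```python
import random

def similarity(u, v):
    """Fraction of matching entries in two binary state vectors."""
    return sum(a == b for a, b in zip(u, v)) / len(u)

def add_edges(states, n_new, rng):
    """Add n_new distinct edges, accepting a random candidate pair with
    probability equal to the similarity of its endpoints' states
    (a toy Hebbian rule)."""
    edges = set()
    nodes = list(range(len(states)))
    while len(edges) < n_new:
        i, j = rng.sample(nodes, 2)
        if rng.random() < similarity(states[i], states[j]):
            edges.add((min(i, j), max(i, j)))
    return edges
```

With two groups of nodes holding identical states within each group and opposite states across groups, every accepted edge falls within a group, reproducing in miniature the observed effect that similarity raises the emergence probability of a link.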

Microbial communication by ‘quorum sensing’ (QS) systems, in which microbes produce and respond to a signaling molecule, enables cells to sense their local density and coordinate a cooperative response to their environment. Many QS systems show intraspecific divergence in terms of specificity, where a signaling molecule from one strain activates its cognate receptor but fails to activate those of other strains in the same species. It is unclear how a signaling molecule and its receptor can co-evolve, and what evolutionary forces maintain this divergence. In this lecture I will present a mathematical model and experimental results that explain how such divergence can occur on social grounds. Briefly, if QS regulates the secretion of public goods, which benefit the community at a cost to the producer, then divergent QS receptor mutants will invade their ancestral population by exploitation, but will subsequently be invaded by a fully divergent signal-receptor mutant through social manipulation. Experimentally, we utilized both natural and synthetic QS-dependent social traits to establish a social selection system in the QS-divergent model bacterium *Bacillus subtilis*. Using competition assays, we find that the predicted scheme of divergence is verified in both well-mixed and structured environments. These results demonstrate the complexity of social interactions and their evolutionary outcomes in the simplest organisms.

Animals' ability to demonstrate both stereotyped and adaptive locomotor behavior is largely dependent on the interplay between centrally generated motor patterns and the sensory inputs that shape them. Theoretical predictions suggest that the degree to which sensory feedback is used for coordinating movement depends on the specific properties of the movement and the environment; i.e., when animals navigate slowly through a complex environment where great precision is required, motor activity is expected to be mostly modulated by neural reflexes and sensory information. In contrast, during fast running or under noisy conditions, the relatively slow neural processing makes feedback-based coordination unlikely.

Our research focuses on the relative importance of central coupling of pattern-generating networks vs. intersegmental afferents for locomotion in the cockroach, an animal renowned for rapid and stable running. To this end, we combine neurophysiological and behavioral experiments with simulations of stochastic models of coupled oscillators. Specifically, we record neural activity patterns and monitor the behavior of preparations whose leg movements are controlled and manipulated. The recorded traces are then compared with model-generated activity to estimate underlying physiological parameters using maximum likelihood techniques. Our findings suggest segmental hierarchies and speed-dependent control, and provide insights into how sensory information from a moving leg dynamically modulates centrally generated patterns. I will discuss these findings and suggest movement-based feedback in cockroach locomotion as a model system for studying the bidirectional interactions between motor control and sensory processing in general.

The Dead Sea basin offers a unique site to study the attenuation of solar ultraviolet radiation, as it is situated at the lowest terrestrial point on earth, about 400 m below sea level. In view of its being an internationally recognized center for photoclimatherapy of various skin diseases, it is of interest to study both its UV intensity and its attenuation as a function of wavelength relative to other sites. In order to provide a basis for inter-comparison of the solar radiation intensity parameters measured at the Dead Sea, a second set of identical parameters is measured simultaneously at a second site (Beer Sheva), located at a distance of ca. 65 km to the west and situated above sea level. The existing database consists of measurements from January 1995 to the present. The results of this ongoing research project will be presented, and the relevance of these findings to the success of photoclimatherapy at the Dead Sea medical spas will be discussed.

In addition, the broad-band normal incidence UVB beam irradiance has been measured at Neve Zohar, Dead Sea basin, using a prototype tracking instrument composed of a Model 501A UV-Biometer mounted on an Eppley Solar Tracker Model St-1. The application of the results of these measurements to the photoclimatherapy protocol for psoriasis patients at the Dead Sea medical spas is now under investigation. The suggested revision would take advantage of the very high diffuse fraction by allowing the patient to receive the daily dose of UVB irradiance without direct exposure to the sun, viz., receive the diffuse UVB irradiance under a sunshade. This would require an increase in sun-exposure time intervals, since the UVB irradiance intensity beneath a sunshade is less than that on an exposed surface.

It is shown that an Aharonov-Bohm (AB) effect exists in magnetohydrodynamics (MHD). This effect is best described in terms of the MHD variational variables [1, 2]. If an MHD flow has a nontrivial topology, some of the functions appearing in the MHD Lagrangian are non-single-valued. These functions have properties similar to the phases in the celebrated AB effect. While the quantum AB effect manifests itself in interference fringe patterns, the MHD Aharonov-Bohm effect manifests itself through new dynamical conservation laws, which also serve as local stability bounds.

We propose a novel approach to distance-based spatial clustering and contribute a heuristic computation of input parameters to guide users in the search for interesting cluster constellations. Our approach entails displaying the results of the heuristics to users, providing a setting from which to start the exploration. We additionally provide interaction capabilities with visual feedback for exploring further clustering options, and the ability to cope with noise in the data. We evaluate our approach on a sophisticated artificial dataset and demonstrate its usefulness on real-world data. Our evaluations reveal the performance and behavior of our approach under different conditions, and prove it beneficial for exploring complex clusters in data sets.

Joint work with Peter Bak, Mikko Nikkila, Valentin Polishchuk, and Harold J. Ship
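As a minimal illustration of distance-based spatial clustering (our sketch, not the paper's heuristic or its parameter computation), single-linkage grouping under a distance threshold can be implemented with a union-find structure:

```python
def cluster(points, eps):
    """Group 2D points connected by chains of pairwise distances
    at most eps (single-linkage clustering via union-find)."""
    parent = list(range(len(points)))

    def find(i):
        # Find the root representative, with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= eps * eps:
                parent[find(i)] = find(j)   # merge the two groups

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

The threshold `eps` plays the role of the input parameter that the paper's heuristics would suggest to the user as a starting point for exploration.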

Early identification of cancer is key to preventing metastasis and improving patient survival. The more sensitive a diagnostic tool, and the more information it provides on the potential susceptibility of disease cells to specific therapies, the better the chances of delivering a successful treatment regimen to individual patients. The phenomenon of chimeric RNA transcripts (i.e., fusions of two separate transcripts) in both normal and disease tissues has been well established; however, abnormal function has been associated with only a few exceptions. To identify fusion transcripts that contribute to pathogenesis or that can aid in diagnosis, I have built mathematical tools for analyzing the enormous amount of data deriving from new RNA sequencing technologies that have been applied to cancer analysis. I have found that transcript fusion events are common in cancer, and may be useful in diagnosing cancer and selecting the most effective therapeutic strategy for individual patients. Chimeric transcripts often give rise to the expression of fusion proteins, which can now interact with a novel combination of protein partners, often combining many of the partner proteins of the two parent polypeptides, as well as making new interactions with as yet unidentified partners. I have developed a systematic method, based on computer algorithms and mathematical modeling, for identifying significant changes to the Protein-Protein Interaction (PPI) network that occur upon the appearance of a novel fusion protein. My goal is to map the PPI networks of cancer-associated fusion proteins and their association with cancer-related metabolic profiles using graph theory and stochastic models, in order to uncover novel oncogenes, signaling pathways, and upstream or downstream kinases that could be inhibited as part of a personalized anti-cancer therapeutic regimen.

A brief introduction to computational chemistry.

Results of combined experimental and computational research, using the Gaussian 09 (G09) program, are shown:

1. The mechanism of the reaction between hydrogen peroxide and Co(II).

2. The partial charge on Cu(III) in Cu(CO₃)₂.


High-throughput DNA sequencing has transformed the landscape of genomic data and is expected to revolutionize our knowledge of evolution and genomic function. However, the abundant sequence data also poses serious computational challenges, and realizing its full potential requires developing efficient and reliable computational and statistical inference methods. This talk will cover work that I have done as part of my postdoctoral research, utilizing newly emerging genomic data sets and population genetic models to examine several open questions in evolution. I will start by describing a study I conducted of ancient human population demography in Africa, focusing on one of the deepest population divergence events in human history, dating to roughly 130 thousand years ago. I will then present work I did as part of a large-scale collaborative effort to study the early evolution of dogs using the complete genome sequences of two dogs and three gray wolves. I will show how we were able to settle several longstanding debates revolving around the origins of dogs using these genomes and an innovative computational approach I developed. Lastly, I will describe a parallel line of research I have been conducting recently, trying to understand the evolutionary roles of non-coding regulatory elements in the human genome. The talk will describe the computational challenges involved in these three studies. I will outline the methods developed to address these challenges, and present the main findings and their significance. I will conclude with a short survey of my ongoing research, and a map of the opportunities and challenges we face in the study of evolution in a world of rapidly evolving genomic data sets.

Note: the talk does not require any prior biological knowledge.

Both averaging and infinite horizon optimization have many applications in physics, engineering, and operations research. We present new results in both fields, as well as in averaged shape optimization.

I will overview the ongoing research on matter-wave solitons in dilute atomic gases. After a brief discussion of the experimental status and basic theory, I shall concentrate on the quantum mechanical properties of these objects, namely on the creation of quantum superposition and entangled states via scattering of solitons off a potential barrier. I will compare the quantum and mean-field descriptions of the system and will consider the possibility of distinguishing these states from classical statistical mixtures.

The following questions will be considered and discussed:

Brief description of the principal facts from the theory of linear functional differential equations (FDE).

Boundary value problems (BVP) and control problems (CP) for FDE: formulation and solvability conditions.

Computer-assisted study of the solvability of BVP and CP.

Classes of control actions: L_2, impulsive, and mixed controls.

Applications to dynamic models of economics.

In this talk we will introduce a discrete version of the Vekua equation in elliptic complex numbers. For the case of constant coefficients we will show a discrete version of the Principle of Similarity, in which a solution can be expressed as a product with one factor being a discrete holomorphic function in elliptic complex numbers.

The development of new topological and algebraic tools related to non-linear spectral theory, and the generalization of complex structures in commutative non-associative algebras for the solution of polynomial ODEs, is proposed.

The methods obtained will be used to study the qualitative behavior of homogeneous polynomial systems (including the existence of bounded/periodic solutions, the existence of algebraic first integrals, etc.).

The results may be applied to studying classical quadratic systems arising in chemistry, solid-body physics, and engineering.

During the past thirty years, renewed interest in investigating the circulation of the Mediterranean Sea in general, and of the eastern Mediterranean specifically, has led to the rapid development and application of ocean models to this area. The models are used to improve our scientific understanding of the system as well as for forecasting and for assessing potential environmental impacts of anthropogenic activities such as the recent exploration and exploitation of offshore gas and oil reserves. At the center of these modeling systems is the circulation model, which is based on the primitive-equations form of the Navier-Stokes equations. The numerical schemes are generally based on second-order-accurate finite differencing of the Eulerian form of the equations. In this lecture a general overview will be given of ocean modeling in the eastern Mediterranean. Examples will be shown for climate scenario simulations, for an operational ocean forecasting system, and for recent downstream applications such as an ecosystem model and an oil spill model.

The talk is a short introduction to the physics of wave processes in inhomogeneous media with fluctuating parameters. I will present the mathematical approaches commonly used in this area, and the most interesting physical results, with emphasis on the recent investigations of Anderson localization and its applications in optics and radiophysics.

We consider a large number of identical inclusions (say spherical) in a bounded domain, with conductivity different from that of the matrix. In the dilute limit, with some mild assumptions on the first few marginal probability distributions (no periodicity or stationarity is assumed), we prove convergence, in the H1 norm, of the expectation of the solution of the steady-state heat equation to the solution of an effective medium problem, which for spherical inclusions is obtained through the Maxwell-Clausius-Mossotti formula. Error estimates are provided as well.

see attached file

The Cauchy-Kovalevskaya Theorem provides sufficient conditions for an elliptic linear equation on the plane with evolution in time to have solutions with prescribed initial value functions. That these conditions cannot be freely relaxed is shown by Lewy's celebrated example of a system with no solutions [2]. In [3] the technique of associated operators is used to establish conditions for solvability provided the initial pair is holomorphic. This result is further generalized in [1] to the case when the initial pair is holomorphic in elliptic complex numbers. In this talk we will discuss some key aspects of this latter result and how they can be used as a tool to generalize results valid for ordinary holomorphic functions.

[1] Alayon-Solarz D., Vanegas C.J., "Operators Associated to the Cauchy-Riemann Operator in Elliptic Complex Numbers" Advances in Applied Clifford Algebras, DOI: 10.1007/s00006-011-0306-4, 2011.

[2] Lewy, H., "An example of a smooth linear partial differential equation without solution", Annals of Mathematics 66 (1): 155-158, doi:10.2307/1970121, 1957.

[3] Son L. H. and Tutschke W., "First Order differential operators associated to the Cauchy-Riemann equations in the plane", Complex Variables and Elliptic Equations, Vol. 48, No. 9, pp 797-801, 2003.

This is joint work with C.J. Vanegas.

Locust swarming is an astounding natural phenomenon. Yet our understanding of the mechanisms leading to the formation of swarms, and of the complex interactions between the swarm and the environment, is still far from complete. In recent years, these questions have been put in the broader context of collective motion, relating to macroscopic synchronization and the collective behavior of large numbers of moving individuals.

I will describe a comprehensive approach for a systematic investigation of the mechanisms and principal animal-animal interactions leading to the emergence of collective behavior in marching locust swarms: from new experimental results using custom-made multiple-target tracking algorithms, through a statistical analysis of the dynamics within a swarm revealing the key interactions between individuals, to modeling the swarm and a multiscale analysis of its dynamics.

Joint work with Amir Ayali and Yotam Ofir (TAU Biology) and Sagi Levi (BIU Math).

The human genome contains tens of thousands of genes that are organized in chromosomes and packed into the nucleus of the cell. How can chromosomes and DNA, which are highly dynamic, stay organized in territories without any compartmentalization?

We study this organization by following the dynamics of various genetic sites using single-particle tracking. The dynamics were analyzed using diffusion models and found to exhibit transient anomalous diffusion. This type of diffusion can be explained by assuming that the DNA forms temporary loops through a certain mediator. We identified a candidate protein (Lamin A) and show the effect of this protein's deficiency in cells. Single-molecule methods that we use for studying protein-DNA interactions will also be demonstrated.

We study the stability of random scale-free networks to degree-dependent attacks. We present analytical and numerical results to compute the critical fraction of nodes that needs to be removed to destroy the network under this attack, for different attack parameters. We study the effect on network robustness of different defense strategies based on the addition of a constant number of links. We test defense strategies based on adding links to either low-degree, mid-degree, or high-degree nodes. Using analytical results and simulations, we find that the mid-degree defense strategy leads to the largest improvement in network robustness against degree-based attacks. We also test these defense strategies on an Internet AS map and obtain similar results.

This is joint work with Reuven Cohen.
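The degree-based attack described above can be sketched in a minimal, self-contained experiment (the generator, network size, and attack fraction here are illustrative choices, not those of the paper):

```python
import random
from collections import deque

def ba_graph(n, m, seed=0):
    """Barabasi-Albert preferential-attachment graph, as adjacency sets."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = list(range(m))            # node ids repeated roughly ~ degree
    for v in range(m, n):
        targets = set()
        while len(targets) < m:          # m distinct, degree-biased targets
            targets.add(rng.choice(repeated))
        for t in targets:
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
    return adj

def giant_component(adj, removed=frozenset()):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        size, queue = 0, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

def degree_attack(adj, frac):
    """Remove the top `frac` fraction of nodes, ranked by degree."""
    k = int(frac * len(adj))
    return frozenset(sorted(adj, key=lambda v: -len(adj[v]))[:k])
```

A defense strategy could be tested in the same harness by adding a fixed number of links to low-, mid-, or high-degree nodes before running `degree_attack` and comparing the surviving giant component.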

We consider hydrodynamic and magnetohydrodynamic models from the point of view of Vekua's theory of generalized analytic functions. The discrete case has also been studied recently. An important result of Vekua's theory of generalized analytic functions is the construction of the canonical form for uniformly elliptic linear systems of equations on the plane by solving an associated Beltrami equation. In this talk we will introduce the concept of the structure polynomial of linear first-order systems on the plane and show how the induced algebraic structure makes it possible to avoid solving a Beltrami equation for a family of cases, even when the system is not uniformly elliptic.

Point location is a fundamental problem in computational geometry. Given a partition of the Euclidean space into disjoint areas, the problem asks: in which area does a given point lie? This problem also connects to the world of data structures, because of the need to create a suitable structure that stores the data and allows the question to be answered efficiently for any query point. We will introduce a method called slab decomposition, which solves the problem in a simple and efficient way using persistent search trees. Moreover, we have implemented this method and found it to work well in practice.
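The slab idea can be sketched in a toy form (illustrative only: this version stores each slab's segment list explicitly, whereas the efficient variant in the talk shares structure between adjacent slabs via persistent search trees; segments are assumed non-crossing):

```python
import bisect

def y_at(seg, x):
    """Height of segment (x1, y1, x2, y2) at abscissa x (assumes x1 <= x <= x2)."""
    x1, y1, x2, y2 = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def build_slabs(segments):
    """Cut the plane into vertical slabs at segment endpoints; in each slab,
    keep the segments that span it, sorted bottom-to-top."""
    xs = sorted({x for x1, _, x2, _ in segments for x in (x1, x2)})
    slabs = []
    for a, b in zip(xs, xs[1:]):
        mid = (a + b) / 2
        spanning = sorted((s for s in segments if s[0] <= a and b <= s[2]),
                          key=lambda s: y_at(s, mid))
        slabs.append(spanning)
    return xs, slabs

def locate(xs, slabs, px, py):
    """Return the face index of (px, py): the number of segments strictly
    below the point, or None if px lies outside the decomposed strip."""
    i = bisect.bisect_right(xs, px) - 1   # binary search for the slab
    if i < 0 or i >= len(slabs):
        return None
    return sum(1 for s in slabs[i] if y_at(s, px) < py)
```

For example, with the horizontal segments `(0, 0, 10, 0)` and `(0, 5, 10, 5)`, the query point `(5, 2)` lies above exactly one segment, so `locate` returns face index 1.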

Recent years have seen a dramatic increase in mathematical models of biological processes that are described in terms of systems of partial differential equations. In this talk I will give some examples of such models and discuss the mathematical challenges that arise in the analysis of these systems. Examples include cancer models as free boundary problems for systems of elliptic-parabolic-hyperbolic equations; a wound healing process modeled by means of the Stokes equation with a free boundary; and a reaction-hyperbolic system which arises in the movement of neurofilaments in axons. Recent results and open questions will be described.

The talk's theme is the extraction of geometric information about graphs (metric or combinatorial) from the spectra of the graphs' Schroedinger operators (continuous or discrete), and from the distribution of sign changes of the corresponding eigenfunctions. This includes questions such as the ability to "hear the shape of the graph"; the extent to which the spectral sequence and the sequence of the numbers of sign changes (or numbers of nodal domains) complement or overlap each other; the derivation of topological information from the study of the response of the spectrum to variation of scalar or magnetic potentials on the graph; etc.

In the present talk I shall illustrate this research effort by reviewing several results I obtained recently. The first example answers the question "Can one count a tree?", which appears in the following context: it is known that the number of sign changes of an eigenfunction on a tree graph equals the position of the corresponding eigenvalue in the spectrum minus one. Is the reverse true? If yes, one can recognize a tree just by counting the sign changes of its eigenfunctions. For the proof I shall introduce an auxiliary magnetic field and use a very recent result of Berkolaiko and Colin de Verdiere to connect the spectrum and the number of sign changes. Next, I will discuss the band spectrum obtained by varying the magnetic phases on the graph. I will prove that the magnetic band-to-gap ratio (quality of conductance) is a universal topological quantity of a graph. This result highlights the spectral geometric importance of this invariant and sheds new light on previous works about periodic potentials on graphs.

The talk contains work in progress with Gregory Berkolaiko.
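For the simplest tree, a path graph, the stated sign-change count is easy to verify numerically, since the Laplacian eigenvectors have the closed (DCT-II) form v_k(j) = cos(pi k (j + 1/2) / n). The following small check is illustrative only and is not the talk's proof:

```python
import math

n = 8  # path graph on n vertices (a tree)

def eigvec(k):
    """k-th eigenvector of the path-graph Laplacian (DCT-II closed form)."""
    return [math.cos(math.pi * k * (j + 0.5) / n) for j in range(n)]

def laplacian(v):
    """Apply the combinatorial Laplacian of the path graph to the vector v."""
    return [sum(v[j] - v[i] for i in (j - 1, j + 1) if 0 <= i < n)
            for j in range(n)]

def sign_changes(v):
    return sum(1 for a, b in zip(v, v[1:]) if a * b < 0)

for k in range(n):
    v = eigvec(k)
    lam = 2 - 2 * math.cos(math.pi * k / n)   # k-th eigenvalue
    # verify the eigen-equation L v = lam * v ...
    assert all(abs(lv - lam * x) < 1e-9 for lv, x in zip(laplacian(v), v))
    # ... and that position k in the spectrum (0-based) gives k sign changes
    assert sign_changes(v) == k
```

The eigenvalues 2 - 2cos(pi k / n) are increasing in k, so the 0-based index k is exactly "the position of the eigenvalue in the spectrum minus one", matching the count of sign changes.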

Disclaimer: this talk outlines the author's personal view.

Derivative pricing is especially challenging in novel and illiquid markets, where pricing relies greatly on assumptions and models rather than on a known flow of market prices. In the novel market of shekel bond options, the estimate of implied volatility could be based on information about other, more liquid, financial instruments in the market. Here we show the relevance, but not the equivalence, of information from the market of shekel swap rates (volatility of swap rates) to the market of bond prices (volatility of bond prices). An approximation of bond price implied volatility based on known yield implied volatility may be potentially useful in pricing shekel bond options. We applied numerical simulations and analyzed historical data to examine the validity of such an approximation.

We investigated the effects of the polydispersity of fuel droplets on thermal explosion. The size distribution of combustible fuel droplets is approximated by corresponding continuous probability density functions (PDF). This approach was proposed three years ago in our previous works. Compared with the parcel method, the PDF method permits us to obtain simple and compact mathematical models. We obtained an explicit expression for the critical condition for thermal explosion. Numerical simulations demonstrate an essential dependence of the thermal explosion limit on the type of probability density function.

Classical analysis of the results of numerical methods is very often limited to the description of tables or graphs of isovalues. This "low level" treatment results from the enormous mass of data being analyzed with inappropriate tools. Our purpose is to suggest a new methodology for numerical data analysis, based on exploratory data mining techniques that have proved themselves in other areas such as biology, medicine, marketing, advertising, and communications, all of which produce "bulimic" amounts of data. The principle of the method is the construction of databases of the entire information produced by the numerical approximation of mathematical models, in order to assess and compare significant differences in performance with the help of techniques like decision trees, Kohonen maps, or neural networks.

Abstract: In this talk we will discuss some optimization problems in insurance. We consider a portfolio containing heterogeneous risks. The premiums of the policyholders might not cover the amount of the payments which an insurance company pays the policyholders. When setting the premium, this risk has to be taken into consideration. On the other hand, the premium that the insured pays has to be fair. This fairness is measured by a function of the difference between the risk and the premium paid. For a given small probability of insolvency, we find the premium for each class such that the difference function is minimized. Further results are achieved by doing the calculations in terms of utility instead of money. We find that by choosing the appropriate utility function it is possible to derive a wide range of premium principles as the optimal solution. Finally, we expand the results to the long-run model by considering a Markov chain in order to calculate the probabilities of insolvency over the years.

(Continuation of lecture from March 18)

Bladder cancer (BC) is the most frequently occurring urological cancer and the fifth most common cancer among men, accounting for approximately 200,000 new cases worldwide annually. We developed a multiscale cellular automata (CA) model to study the growth of BC.

According to existing statistics, 80% of BC patients had occupational exposure to chemical carcinogens (in the rubber, dye, textile, or plant industry) and/or were regular smokers for long periods of time. The carcinogens from the bladder lumen affect the umbrella cells of the urothelium (the epithelial tissue lining the bladder) and subsequently penetrate to the deeper layers of the tissue (intermediate and basal cells). It is a years-long process until the carcinogenic substance accumulates in the tissue in the quantity necessary to trigger DNA mutations leading to tumor development. We address carcinogen penetration (modeled as a nonlinear diffusion equation with a variable coefficient and a source term) within the cellular automata (CA) framework of the urothelial cell life cycle. Our approach combines discrete and continuous models of some of the crucial biological and physical processes inside the urothelium and yields a first theoretical insight into the initial stages of BC development and growth.

For the treatment, we present a modeling study of bladder cancer therapy via pulsed immunotherapy with Bacillus Calmette-Guérin (BCG), an attenuated strain of Mycobacterium bovis (M. bovis). Impulsive differential equations are used to study periodic BCG instillations (pulsed BCG therapy). The mathematical relationships between schedule (pulsing frequency) and dose (therapy strength) are determined through appropriate mathematical analysis. The final goal of this work is to determine an applicable treatment regime that prevents immune-system side effects from BCG and enhances tumor destruction.

**Authors: Svetlana Bunimovich-Mendrazitsky, Helen Byrne, Eliezer Shochat, Eugene Kashdan, Israel Chaskalovic and Lewi Stone**
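The pulsed-therapy setting can be illustrated with a toy impulsive system (hypothetical parameters and dynamics, much simpler than the model in the talk): a tumor population grows logistically, is killed in proportion to the BCG level, and the BCG level decays continuously while being topped up impulsively every `period` days.

```python
def simulate(dose, period, t_end=100.0, dt=0.01):
    """Forward-Euler integration of a toy impulsive tumor/BCG system.
    Returns the tumor size at time t_end.  All parameters are illustrative."""
    x, b = 1.0, 0.0                    # tumor size, BCG level
    r, K, kill, decay = 0.1, 10.0, 0.5, 0.3
    t, next_pulse = 0.0, 0.0
    while t < t_end:
        if t >= next_pulse:
            b += dose                  # impulsive instillation (jump in b)
            next_pulse += period
        x += dt * (r * x * (1 - x / K) - kill * b * x)   # logistic growth - kill
        b += dt * (-decay * b)                           # continuous BCG decay
        x = max(x, 0.0)
        t += dt
    return x
```

In this toy, `simulate(dose=0.0, period=7.0)` lets the tumor grow toward its carrying capacity, while a sufficiently large weekly dose drives it toward extinction, mirroring the schedule-versus-strength trade-off analyzed in the talk.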

The most commonly used theory of option pricing is the Black-Scholes PDE model, which is used to obtain the expectation (first moment) of an option's current value. We show that modified Black-Scholes PDEs can be used to obtain the n-th moment of an option's current value. We demonstrate how to find the second moment and the zeroth-order moment for different standard options (European options, barrier options, American options), and use these to find the variances of the option values and the probabilities of expiring worthless. These latter two quantities give a perspective on the option's risk, which is important in investment decisions and in pricing theories.
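As a plain numerical cross-check of the quantities mentioned above, one can estimate the first two moments of a European call's discounted payoff by Monte Carlo under geometric Brownian motion (an illustrative stand-in for solving the modified PDEs; parameter values are arbitrary):

```python
import math
import random

def call_moments(S0, K, r, sigma, T, n_paths=200_000, seed=1):
    """Monte Carlo estimates for a European call under risk-neutral GBM:
    first moment (price), variance of the discounted payoff, and the
    probability of expiring worthless."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    m1 = m2 = worthless = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        pay = disc * max(ST - K, 0.0)      # discounted payoff
        m1 += pay
        m2 += pay * pay
        worthless += ST <= K
    m1 /= n_paths
    m2 /= n_paths
    return m1, m2 - m1 * m1, worthless / n_paths
```

The first return value should agree with the classical Black-Scholes price, while the variance and the probability of expiring worthless correspond to the risk measures that the second-moment and zeroth-moment PDEs target.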

In this talk we will discuss some optimization problems in insurance. We consider a portfolio containing heterogeneous risks. The premiums of the policyholders might not cover the amount of the payments which an insurance company pays the policyholders. When setting the premium, this risk has to be taken into consideration. On the other hand, the premium that the insured pays has to be fair. This fairness is measured by a function of the difference between the risk and the premium paid. For a given small probability of insolvency, we find the premium for each class such that the difference function is minimized. Further results are achieved by doing the calculations in terms of utility instead of money. We find that by choosing the appropriate utility function it is possible to derive a wide range of premium principles as the optimal solution. Finally, we expand the results to the long-run model by considering a Markov chain in order to calculate the probabilities of insolvency over the years.

Flow through heterogeneous porous media is well represented by parabolic equations. One of the well-known practical aspects of this theory is groundwater movement through soil.

The talk is dedicated to the problem of large-scale groundwater modeling where the available groundwater level data and pumping tests are scarce and, in addition, the spatial distribution of the groundwater data is not homogeneous. For a specific pumping area, head data are available, while for other large areas of the aquifer only limited data are available. In such an aquifer the use of classical numerical methods may lead to inaccurate results. This is due to the fact that the whole calibration process is validated using limited and scarce groundwater data. Thus, in regions where head data are not available the calibration is less accurate. Moreover, the head results are sensitive to changes in the permeability parameters.

In the present contribution, we propose a modeling approach based on a cell model. The present cell model (called the ACM method) is currently operating at a regional scale, and its goal is to compute the groundwater fluxes entering into or leaving a given region using a first-level calibration model that conserves the mass balance. The cell model is derived from the general groundwater flow equations using a finite volume approach combined with a mixed formulation. Within this advanced cell model, the cells are defined according to the hydrogeology of the aquifer, and the state variables may be the water level and/or the flux rate entering or leaving the cells. The software was developed using MATLAB 7, connected via COM technology with a Visual Basic graphical user interface.

The approach was implemented on a real case study of the Yarkon-Taninim aquifer in Israel. The ACM approach allows introducing recharge input and boundary conditions into the model at a large scale. This enables better estimates of the mass balance in the aquifer. The ACM model may also be used as a pre-model for large-scale modeling, combined with a high-resolution model for a specific region where the boundary conditions are created by the ACM model.

Energy conversion devices, such as fuel cells, lithium-ion batteries, and photocatalytic devices, operate by selective conduction of charged ions through a membrane. The membranes are created by immersing polymer electrolytes in a solvent, in which the polymers spontaneously form nanoscale pore networks that serve as primitive ion channels.

In this talk, I present a novel model for the self-assembly of the nanoscale pore network as a gradient flow along classes of competing interfacial and bending energies. I present a sharp interface analysis of the model, and show that the evolution laws for the pores are given by high-order Ricci-curvature flows coupled to interfacial dynamics.

We use our model, in conjunction with experimental scattering data, to study the morphology of Nafion, the industry-standard polymer electrolyte membrane used in fuel cells.

This is joint work with Keith Promislow.

We present a new classification engine based on the concept of alpha-shapes. Our technique is easy to implement and use, is time-effective, and generates good recognition results. We show how to efficiently use alpha-shapes of low dimension to support data in arbitrary dimension, thus overcoming the lack of shape algorithms in high dimensions. We further show how to elegantly choose suitable primitives to capture desirable shapes that tightly bound the data. We present experiments showing that our technique generates good results on Optical Character Recognition (OCR) tasks. Based also on strong theoretical properties, we believe that our technique can serve as a desirable classification engine for various domains in addition to OCR.

Isotonic regression is a nonparametric approach for fitting monotonic models to data that has been widely studied from both theoretical and practical perspectives. However, this approach encounters computational and statistical overfitting issues in higher dimensions. To address both concerns we present an algorithm, which we term Isotonic Recursive Partitioning (IRP), for isotonic regression based on recursively partitioning the covariate space through solution of progressively smaller "best cut" subproblems. This creates a regularized sequence of isotonic models of increasing model complexity that converges to the global isotonic regression solution. Models along this sequence are often more accurate than the unregularized isotonic regression model because of the complexity control they offer. We quantify this complexity control through estimation of degrees of freedom along the path. Furthermore, we show that IRP for the classic l2 isotonic regression can be generalized to convex differentiable loss functions such as Huber's loss. In another direction, we use the Lasso framework to develop another isotonic path of solutions that is computationally more expensive but offers even better complexity control. Success of the regularized models in prediction and IRP's favorable computational properties are demonstrated through a series of simulated and real data experiments.
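The one-dimensional l2 building block of isotonic regression, the pool-adjacent-violators algorithm (PAVA), can be sketched as follows (illustrative only; IRP itself recursively partitions a multidimensional covariate space, which this toy does not attempt):

```python
def isotonic_l2(y, w=None):
    """Pool Adjacent Violators: weighted least-squares nondecreasing fit to y."""
    w = w or [1.0] * len(y)
    blocks = []  # each block: [weighted mean, total weight, length]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while the last two blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, n1 + n2])
    # expand blocks back to a fitted value per observation
    fit = []
    for m, _, n in blocks:
        fit.extend([m] * n)
    return fit
```

For example, `isotonic_l2([1, 3, 2, 4])` pools the violating pair `(3, 2)` into their mean, giving the monotone fit `[1, 2.5, 2.5, 4]`; replacing the squared loss in the block means with another convex differentiable loss mirrors the Huber-loss generalization mentioned above.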

We outline a proof of the rigidity statement in the positive mass theorem with charge, incorporating the modified Jang equation. This is joint work with M. Khuri.

We present three systems from completely different domains dominated by the same basic dynamics: collapse to an absorbing state accompanied by reoccupation of space from neighboring points in space. The properties of these systems are dominated by their spatial parameters, such as the dimension and the diffusion rate.

We study the competition between two types of yeast over the usage of complex sugars, where a cheater uses existing resources and does not contribute to the existence of the population. Another system we study is the usage of resources in a modern economy. Finally, we study complex structure formation in catalyst-induced proliferation. We show that these very different systems are dominated by very similar dynamical principles.

A scanning path is a path having a direct line of sight (not intersecting any obstacle) to every point in free space. I will present the problem of the optimal scanning path, defined as the shortest of all scanning paths. I will discuss possible methods of approximating the optimal scanning path. (Joint work with Slavic Shamshanov)

- Last modified: 24/10/2017