AGH University of Science and Technology
Department of Computer Science
In this paper we present methods and algorithms for large-scale computing which cover different areas of computational and computer science. They concern particle models, CFD computing, animation, monitoring and prediction of application performance, as well as scientific visualization and scientific data storage and retrieval.
The goal of the EU IST int.eu.grid project is to build middleware facilities which enable the execution of real-time and interactive applications on the Grid. Within this research, relevant support for the HEP application is provided by a Virtual Organization, a monitoring system, and a real-time dispatcher (RTD). These facilities implement the pilot-job concept, which allows grid resources to be allocated in advance and events to be analyzed in real time. In the paper we present the HEP Virtual Organization, the details of the monitoring, and the RTD. We also show how the HEP application is run with these facilities so as to meet its real-time requirements.
The paper describes a sequential MD algorithm in which distances between particles are evaluated using fixed-point arithmetic. The errors introduced by the method are estimated. Simulation timings, as well as fluctuations of the total energy, are compared to the results obtained using floating-point arithmetic.
This paper describes Graph Investigator, an application intended for the analysis of complex networks. A rich set of application functions is briefly described, including graph feature generation, comparison, visualization and editing. The program makes it possible to analyze global and local structural properties of networks with the use of various descriptors derived from graph theory. Furthermore, it allows one to quantify inter-graph similarity by ...
The oceanic lithosphere subducts into the deep mantle due to negative buoyancy forces, through fractures in the near-surface layer and plastic flow in the central portions (1). Once subducted, lithospheric plates encounter a further resisting force near the 660 km discontinuity. Evidence for these opposing forces in the deep transition zone has been well documented through the analysis of focal mechanisms of deep and intermediate earthquakes (2), but the ultimate fate of subducted lithosphere has become detectable ...
- by Witold Dzwinel and +1
A novel technique based on cluster analysis of the multi-resolutional structure of earthquake patterns is developed and applied to observed and synthetic seismic catalogs. The observed data represent seismic activity around the Japanese islands in the 1997-2003 time interval. The synthetic data were generated by numerical simulations for various cases of a heterogeneous fault governed by 3-D elastic dislocation and power-law creep. At the highest resolution, we analyze the local cluster structure in the data space of seismic events for the two types of catalogs using an agglomerative clustering algorithm. We demonstrate that small-magnitude events produce local spatio-temporal patches corresponding to neighboring large events. Seismic events, quantized in space and time, generate a multi-dimensional feature space of earthquake parameters. Using a non-hierarchical clustering algorithm and multidimensional scaling, we explore the multitude of earthquakes by real-time 3-D visualization and inspection of multivariate clusters. At the resolutions characteristic of the earthquake parameters, all of the ongoing seismicity before and after the largest events accumulates into a global structure consisting of a few separate clusters in the feature space. We show that by combining the clustering results from low- and high-resolution spaces, we can recognize precursory events more precisely and decode vital information that cannot be discerned at a single level of resolution.
Grid - a virtual metacomputer that uses a network of geographically distributed local networks, computers, computational resources and services. Grid computing focuses on distributed computing technologies that go beyond traditional dedicated clusters.
Data Grids - the controlled sharing and management of large amounts of distributed data.
Problem solving environment (PSE) - specialized computer software for solving one class of problems. PSEs use the language of the respective field and often employ modern graphical user interfaces. The goal is to make the software easy to use for specialists in fields other than computer science. PSEs are available for generic problems, such as data visualization or large systems of equations, and for narrow fields of science or engineering.
Global seismographic network (GSN) - the goal of the GSN is to deploy permanent seismic recording stations uniformly over the earth's surface. The GSN stations continuously record seismic data from very-broad-band seismometers at 20 samples per second, and provide for high-frequency (40 sps) and strong-motion (1 and 100 sps) sensors where scientifically warranted. A further goal of the GSN is to provide real-time access to its data via the Internet or satellite. As of 2003, over 75% of the more than 128 GSN stations meet this goal.
WEB-IS - a software tool that allows remote, interactive visualization and analysis of large-scale 3-D earthquake clusters over the Internet through client-server interaction.
Scientific visualization - a branch of computer graphics and user interface design that deals with presenting data to users by means of patterns and images. The goal of scientific visualization is to improve understanding of the data being presented.
Interactive visualization - a branch of graphic visualization that studies how humans interact with computers to create graphic illustrations of information, and how this process can be made more efficient.
Remote visualization - tools for interactive visualization of high-resolution images on a remote client machine, rendered and preprocessed on the server.
- by Witold Dzwinel
A new model for thermal convection simulations using the molecular dynamics (MD) approach is reported briefly. Preliminary results are presented and further development of the method is discussed.
- by Witold Dzwinel and +1
- High performance, Thermal Convection
In this paper we present short-range molecular dynamics algorithms and programs elaborated for sequential and vector computer architectures. They are suitable for investigations of surface and subsurface layer properties, which require about 10^5 molecules in the computational box.
- by Witold Dzwinel and +2
- Mathematical Sciences, Physical sciences
We examine a set of statistical network descriptors for the classification of various vascular networks generated by a model of tumour-induced angiogenesis. In the model, two spatio-temporal scales, representing the global behaviour of the vascular network and the local processes in the surrounding tissue, are clearly separated. The grid of cellular automata (CA) corresponds to both healthy and cancerous tissue, and it also serves as the environment for simulating the fast processes of nutrient (O2) and TAF (tumour angiogenic factor) diffusion. The vascular network is modelled by an irregular graph of cellular automata placed on top of the CA grid. The model is controlled by a selected set of parameters reflecting the influence of various factors, such as TAF and O2 concentration, and by model parameters responsible for vessel sprouting and anastomosis. We discuss the application of single network descriptors against an approach based on a set of various descriptors subjected to multidimensional analysis methods.
- by Witold Dzwinel
Microstructural dynamics and boundary singularities generate complex multiresolution patterns, which are difficult to model with continuum approaches using partial differential equations. To provide an effective solver across the diverse scales with different physics, the continuum dynamics must be augmented with atomistic models, such as non-equilibrium molecular dynamics (NEMD). The spatio-temporal disparities between continuum and atomistic approaches make this coupling a computationally demanding task. We present a multiresolution homogeneous particle paradigm, as a cross-scale model, which allows the microscopic and macroscopic modes to be reproduced in the mesoscopic scale. We describe a discrete-particle model in which the successive spatio-temporal scales are obtained by coarse-graining of hierarchical systems consisting of atoms, molecules, fluid particles and moving grid nodes. We then show examples of 2-D and 3-D modeling of the Rayleigh-Taylor fluid instability, phase separation, colloidal arrays and colloidal dynamics in the mesoscale, using fluid particles as the exemplary discretized model. The modeled multiresolution patterns are similar to those observed in laboratory experiments. We show that they can mimic scales ranging from a single micelle, through colloidal crystals and colloidal aggregates, up to macroscopic phenomena involving the clustering of red blood cells in the vascular system. The computationally homogeneous discrete-particle model can be summarized in the following hierarchical scheme: non-equilibrium molecular dynamics (NEMD), fluid particle model (FPM), thermodynamically consistent DPD, and smoothed particle hydrodynamics (SPH).
KEYWORDS: Fibrin polymerization; Blood modeling; Capillary vessels; Discrete-particles
The paper presents a molecular dynamics C-language program suitable for mixtures of mono-atomic molecules of different types enclosed in a cuboid box with periodic boundary conditions. The molecules interact through the short-range Lennard-Jones potential. To solve the Newtonian equations of motion, the leapfrog scheme is applied. Neighbours of a particular molecule are found using the link-cell method. The program has been developed for microcomputers and workstations, so it incorporates a sequential algorithm optimized with respect to CPU time.
Dissipative particle dynamics (DPD) and its generalization, the fluid particle model (FPM), represent the "fluid particle" approach for simulating fluid-like behavior in the mesoscale. Unlike particles in the molecular dynamics (MD) method, a "fluid particle" can be viewed as a "droplet" consisting of liquid molecules. In FPM, "fluid particles" interact by both central and non-central short-range forces of conservative, dissipative and Brownian character. In comparison to MD, the FPM method in 3-D requires two to three times more memory and three times more communication overhead. The computational load per step per particle is comparable to MD, because a shorter interaction range is allowed between "fluid particles" than between MD atoms. The classical linked-cell technique and decomposition of the computational box into strips allow for rapid modification of the code and for implementing non-cubic computational boxes. We show that the efficiency of the FPM code depends strongly on the number of particles simulated, the geometry of the box, and the computer architecture. We give a few examples from long FPM simulations involving up to 8 million fluid particles and 32 processors. Results from 3-D FPM simulations of phase separation in a binary fluid and of the dispersion of a colloidal slab are presented.
Simulating natural phenomena at greater accuracy results in an explosive growth of data. Large-scale simulations with particles currently involve ensembles of between 10^6 and 10^9 particles covering 10^5-10^6 time steps. Thus, the data files produced in a single run can reach from tens of gigabytes to hundreds of terabytes. This data bank allows one to reconstruct the spatio-temporal evolution of both the particle system as a whole and each particle separately. Realistically, looking at a large data set at full resolution at all times is not possible and, in fact, not necessary. We have developed an agglomerative clustering technique based on the concept of a mutual nearest neighbor (MNN). This procedure can easily be adapted for efficient visualization of extremely large data sets from simulations with particles at various resolution levels. We present the parallel algorithm for MNN clustering and its timings on the IBM SP and SGI/Origin 3800 multiprocessor systems for up to 16 million fluid particles. The high efficiency obtained is mainly due to the similarity in the algorithmic structure of MNN clustering and particle methods. We show various examples drawn from MNN applications in the visualization and analysis of the order of a few hundred gigabytes of data from discrete-particle simulations using dissipative particle dynamics and fluid particle models. Because data clustering is the first step in this concept-extraction procedure, the clustering technique may also be employed in many other fields, such as data mining, earthquake analysis and stellar populations in nebula clusters.
We investigate the physical mechanism of aggregation of red blood cells (RBC) in capillary vessels using a discrete-particle model. This model can accurately capture scales from 0.001 µm to 100 µm, far below the scales that can be modeled numerically with classical computational fluid dynamics. We use a discrete-particle model in 3-D for modeling the flow of plasma and RBCs in a capillary tube. Two situations, with and without necking of the vessel, have been considered. The flexible, viscoelastic red blood cells and the walls of the elastic vessel are made up of solid particles held together by elastic harmonic forces. The blood plasma is represented by a system of dissipative fluid particles. We have simulated the flow of cells of different shapes, such as normal and "sickle" cells. The cells coagulate despite the absence of adhesive forces in the model. The total number of fluid and solid particles used ranges from 1 to 3 million. We conclude that the aggregation of red blood cells in capillary vessels is stimulated by depletion forces and hydrodynamic interactions. The cluster of "sickle" cells formed in the necked region of the conduit efficiently decelerates the flow, while normal cells can pass through. These qualitative results from numerical simulations accord well with laboratory findings.
In the paper the assumptions of the particle method are presented and adapted to macroscopic quasi-particles and quasi-potentials. The system representing a physical object consists of a large number N of mutually interacting "particles". The model can be used as an alternative to the continuous-medium model described by a set of partial differential equations, usually solved by the finite element method (FEM). The advantages and disadvantages of both approaches for the simulation of shock phenomena are discussed. The unique suitability of the presented method for simulating discontinuities of matter under stress is exemplified by preliminary results obtained for the simulation of projectile and long-rod penetration.