Engineering Optimizations via Nature-Inspired Virtual Bee Algorithms
ABSTRACT Many engineering applications involve the minimization of some objective function. In the case of multilevel optimization, or of functions with many local minima, the optimization becomes very difficult. Biology-inspired algorithms such as genetic algorithms can be more effective than conventional algorithms under appropriate conditions. In this paper, we develop a new virtual bee algorithm (VBA) to solve function optimization problems arising in engineering. For functions of two parameters, a swarm of virtual bees is generated and starts to move randomly in the phase space. These bees interact when they find target nectar corresponding to the encoded values of the function, and the solution to the optimization problem is obtained from the intensity of these bee interactions. Simulations on De Jong's test function and Keane's multi-peaked bumpy function show that the single-agent VBA is usually as effective as genetic algorithms, while the multi-agent implementation optimizes more efficiently than conventional algorithms due to the parallelism of the multiple agents. Comparisons with other algorithms, such as genetic algorithms, are also discussed in detail. |
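The abstract does not spell out the VBA update rules, so the following is only a minimal sketch of a bee-swarm optimizer in its spirit: bees forage by random walks and are drawn toward the best nectar source found so far, tested here on De Jong's first (sum-of-squares) function. The parameter values and the interaction rule are assumptions for illustration, not the paper's algorithm.

# Hedged sketch of a bee-swarm optimizer in the spirit of the VBA described
# above; the paper's exact "interaction intensity" rules are not given, so the
# communication step below (drift toward the best-known source) is assumed.
import numpy as np

def de_jong(x):
    """De Jong's first test function: sum of squares, global minimum at the origin."""
    return np.sum(x ** 2, axis=-1)

def bee_swarm_minimize(f, dim=2, n_bees=30, n_iters=200,
                       step=0.1, bounds=(-5.12, 5.12), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    bees = rng.uniform(lo, hi, size=(n_bees, dim))   # random initial positions
    best = bees[np.argmin(f(bees))].copy()
    for _ in range(n_iters):
        # Random foraging walk plus a drift toward the best-known nectar source;
        # the drift stands in for the bee-to-bee interaction.
        drift = 0.1 * (best - bees)
        bees = np.clip(bees + drift + step * rng.normal(size=bees.shape), lo, hi)
        cand = bees[np.argmin(f(bees))]
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)

if __name__ == "__main__":
    x_best, f_best = bee_swarm_minimize(de_jong)
    print(x_best, f_best)   # should approach the origin with f close to 0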
Connections between the Lines: Augmenting Social Networks with Text
ABSTRACT Network data is ubiquitous, encoding collections of relationships between entities such as people, places, genes, or corporations. While many resources for networks of interesting entities are emerging, most of these can only annotate connections in a limited fashion. Although relationships between entities are rich, it is impractical to manually devise complete characterizations of these relationships for every pair of entities on large, real-world corpora. In this paper we present a novel probabilistic topic model to analyze text corpora and infer descriptions of its entities and of relationships between those entities. We develop variational methods for performing approximate inference on our model and demonstrate that our model can be practically deployed on large corpora such as Wikipedia. We show qualitatively and quantitatively that our model can construct and annotate graphs of relationships and make useful predictions. |
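As a rough illustration of the general idea, not the authors' model, the sketch below builds a pseudo-document from the text mentioning each entity pair and fits a plain LDA topic model, so every relationship receives a topic mixture as a crude description. The entity names and sentences are invented toy data.

# Stand-in sketch: describe entity-pair relationships with an off-the-shelf LDA
# topic model over pseudo-documents.  This is not the paper's model or its
# variational inference; it only illustrates the kind of output described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: each entry is the concatenated text mentioning one entity pair.
pair_docs = {
    ("Alice", "Acme Corp"): "alice founded acme corp and serves as its chief executive",
    ("Bob", "Acme Corp"): "bob joined acme corp as an engineer working on compilers",
    ("Alice", "Bob"): "alice hired bob and they collaborate on compiler research",
}

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(pair_docs.values())

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)              # per-pair topic mixtures

terms = vectorizer.get_feature_names_out()
for pair, mix in zip(pair_docs, theta):
    top_topic = mix.argmax()
    top_terms = [terms[i] for i in lda.components_[top_topic].argsort()[-3:]]
    print(pair, "->", top_terms)          # crude "relationship description"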
Using Hollywood movie trailers, UC Berkeley researchers have succeeded in decoding and reconstructing people's dynamic visual experiences.
The brain activity recorded while subjects viewed a set of film clips was used to create a computer program that learned to associate visual patterns in the movie with the corresponding brain activity. The brain activity evoked by a second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Using the new computer model, researchers were able to decode brain signals generated by the films and then reconstruct those moving images. Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases. It may also lay the groundwork for brain-machine devices that would allow people with cerebral palsy or paralysis, for example, to guide computers with their minds. The lead author of the study, published in Current Biology on September 22, 2011, is Shinji Nishimoto, a post-doctoral researcher in the laboratory of Professor Jack Gallant, a neuroscientist and coauthor of the study. Other coauthors include Thomas Naselaris with UC Berkeley's Helen Wills Neuroscience Institute, An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering, and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics. Full story: http://newscenter.berkeley.edu/2011/0... Video produced by Roxanne Makasdjian, UC Berkeley Media Relations |
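A minimal sketch of the identification step described in the story, under the assumption of a simple linear (ridge) encoding model and entirely synthetic data: predict a response for every clip in a candidate library and pick the clip whose prediction best matches the observed response. The study's actual stimuli, features and fMRI data are not reproduced here.

# Hedged sketch of encoding-model identification with synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_lib, n_feat, n_vox = 200, 1000, 50, 300

# Synthetic stand-ins for clip features and measured voxel responses.
W_true = rng.normal(size=(n_feat, n_vox))
X_train = rng.normal(size=(n_train, n_feat))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_vox))

# Ridge-regression encoding model: voxel responses predicted from clip features.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat), X_train.T @ Y_train)

# Candidate library (the "YouTube prior") and one new observed response.
X_lib = rng.normal(size=(n_lib, n_feat))
true_idx = 42
y_obs = X_lib[true_idx] @ W_true + 0.5 * rng.normal(size=n_vox)

# Identification: rank clips by how well their predicted response matches y_obs.
Y_pred = X_lib @ W
scores = (Y_pred @ y_obs) / (np.linalg.norm(Y_pred, axis=1) * np.linalg.norm(y_obs))
print("decoded clip:", scores.argmax(), "true clip:", true_idx)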
Pattern recognition in bioinformatics
ABSTRACT Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained. |
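One pitfall such a course typically covers is selection bias on high-dimensional data. The sketch below, using made-up noise data and scikit-learn, contrasts feature selection performed outside the cross-validation loop (biased) with selection refit inside each fold (honest); the sample sizes and classifier choice are assumptions for illustration.

# With many features and few samples (e.g. microarray data), selecting features
# on the full data set before cross-validation yields optimistic accuracy.
# The data here is pure noise, so an honest estimate should hover near chance.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5000))          # e.g. 50 samples, 5000 "genes"
y = rng.integers(0, 2, size=50)          # random labels: no real signal

# Wrong: select features on the full data set, then cross-validate.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
biased = cross_val_score(LinearSVC(), X_sel, y, cv=5).mean()

# Right: feature selection is refit inside each training fold via a pipeline.
pipe = make_pipeline(SelectKBest(f_classif, k=20), LinearSVC())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"biased estimate: {biased:.2f}")   # typically well above 0.5
print(f"honest estimate: {honest:.2f}")   # close to chance (0.5)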