Events

Abstracts

Alexandre Da Veiga

GeoFlow
GeoFlow brings high-performance, interactive, 3-D spatial-temporal data visualization to Excel 2013, enabling information workers to discover and share new insights from geographic and temporal data through rich 3-D storytelling and fluid, cinematic guided tours.

Bio: Alexandre da Veiga is a Senior Developer at Microsoft working on the graphics engine for Excel’s GeoFlow.  Prior to GeoFlow, Alex worked on several Microsoft geospatial products including Virtual Earth 3D and Bing Maps.  He received his B.S. in computer science from PUC-PR in Curitiba, Brazil, before joining Microsoft in 2001.

 

Allan Hanbury

A Cloud-based Evaluation Infrastructure for Medical Image Analysis and Search
This talk presents the cloud-based infrastructure being built in the EU VISCERAL project (http://visceral.eu) for evaluating machine learning and information retrieval algorithms on terabytes of medical images. Instead of participants downloading data and running evaluations locally, the data will be centrally available on the cloud and the algorithms to be evaluated will be run on the cloud, effectively bringing the algorithms to the data. The design of the VISCERAL infrastructure is presented, concentrating on the components for coordinating the participants in the benchmark and managing the ground-truth creation. The medical imaging benchmarks that will be run on this infrastructure are also described.
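
The inversion described here, moving the computation to the data rather than the data to the researcher, can be pictured as a submission workflow. The sketch below is purely illustrative: the endpoint, field names, and registry are invented for this example and are not VISCERAL's actual interface.

    # Illustrative only: an invented submission call showing the
    # "algorithms to the data" pattern; VISCERAL's real interfaces may differ.
    import requests

    submission = {
        "participant": "team-42",                                   # hypothetical team id
        "algorithm_image": "registry.example.org/team-42/seg:1.0",  # containerized algorithm
        "benchmark": "anatomy-segmentation",                        # hypothetical benchmark
    }
    # The platform runs the algorithm on cloud nodes colocated with the images,
    # so the terabytes of medical data never leave the data center.
    response = requests.post("https://eval.example.org/api/runs", json=submission)
    print(response.json()["run_id"], "queued to run next to the data")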

 

Carlos A. Moreira-Filho & Luciano da Fontoura Costa

Complex Network-Driven View of Genomic Mechanisms Underlying Common Diseases
Common diseases (CD), like cancer, heart disease, diabetes or acquired epilepsy, are caused by the interplay of genomic and environmental factors. Genes underlying these non-transmissible chronic diseases usually display a higher number of connections in transcriptional networks and are called hubs because they connect many genes (nodes) that would not be connected otherwise. Hubs may coordinate or link together specific cellular processes. Therefore, in order to acquire a better understanding of the molecular mechanisms involved in a particular CD and its pathophenotypes, it is essential to develop methods for studying the complete set of valid transcripts in the disease's target tissue or cell population. The mathematical and computational tools for this task lie in the field of complex network analysis.
Here we present a methodology for complex network visualization (3D) and analysis that allows the categorization of network nodes according to distinct hierarchical levels of gene-gene connections, or node degree (hubs), and of interconnection between node neighbors, or concentric node degree (VIPs, high hubs). This methodology was applied to the investigation of genomic mechanisms underlying febrile (FS) and afebrile (NFS) forms of drug-resistant epilepsy by comparatively studying CA3 hippocampal co-expression networks of FS and NFS cases. This approach enabled us to identify the distinct roles of the most highly connected hubs in each form of the disease and proved to be a useful tool for systems biology-based antiepileptic drug discovery. The network centrality observed for the hubs, VIPs and high hubs of the co-expression networks is consistent with the network disease model, in which a group of nodes whose perturbation leads to a disease phenotype forms a disease module occupying a central position in the network. This result shows that the probability of exerting a therapeutic effect through the modulation of single genes is higher if these genes are highly interconnected in transcriptional networks.
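
As a rough illustration of these node categories, the sketch below uses Python's networkx on a random stand-in graph: hubs are flagged by degree, and high hubs by additionally having highly connected neighbors, a stand-in for the concentric node degree. The graph, cutoff, and names are assumptions for demonstration, not the authors' method or data.

    # Illustrative sketch, not the authors' pipeline: flag hubs by degree and
    # "high hubs" by average neighbor degree, on a placeholder random graph.
    import networkx as nx

    G = nx.erdos_renyi_graph(200, 0.05, seed=42)     # stand-in for a co-expression network

    degree = dict(G.degree())
    neighbor_degree = nx.average_neighbor_degree(G)  # how connected each node's neighbors are

    cutoff = sorted(degree.values())[int(0.9 * len(degree))]   # assumed: top ~10% are hubs
    hubs = [n for n, d in degree.items() if d >= cutoff]

    # High hubs (VIPs): hubs whose neighborhoods are themselves highly connected.
    high_hubs = [n for n in hubs if neighbor_degree[n] >= cutoff]
    print(len(hubs), "hubs;", len(high_hubs), "high hubs")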

This work was supported by FAPESP grants 2009/53443-1 and 2005/56446-0 to CAM-F, and 2005/00587-5 and 2011/50761-2 to L da F Costa.

 

Carole Goble

The Reality of Reproducibility of Computational Science
Reproducibility in principle underpins the scientific method. For an experimental finding to be reproducible, its materials must be available and its methods clear, accurate, and transparent. In this talk I will explore the reality of reproducibility in computational science, focusing on methods that use workflows. I will discuss what we mean by reproducibility; differentiate between the preservation and conservation of workflows; sketch the role provenance has to play; and point to a growing number of initiatives that aim to make in silico science reproducible, including our own first steps towards a reproducibility framework based on Research Objects. Although technical infrastructure helps towards the utopian ideal of truly reproducible science, it is social factors that define the reality.

 

Chad Gaffield

Big Data, Digital Humanities and the New Knowledge Environments of the 21st Century
The past year proved to be the time when digital scholarship moved to the top of the agenda for schooling at all levels as well as for research fields across all disciplines. From Big Data to Massive Open Online Courses, intense debate among educators, scholars, and policy makers captured public attention around the world and has set the stage for concerted efforts to embrace the Digital Age. This presentation will focus on the key role that scholars across the social sciences and humanities are now playing to advance digital scholarship, especially in interdisciplinary and international research teams. Examples will include projects funded through the Image, Text, Sound and Technology program, the Canada Research Chairs program, the Networks of Centres of Excellence program, and the international Digging into Data initiative. Special attention will also be given to the upcoming World Social Science Forum entitled "Social Transformations and the Digital Age," to be held in Montreal, October 13-16, as well as the proposed TransAtlantic Platform for the Social Sciences and Humanities, in which Brazil is a partner.

 

Cristián Bonacic

Life in Andes
Many of Latin America’s endangered species are insufficiently studied; more information is needed to help preserve them. Scientists at Pontifical Catholic University of Chile, in coordination with Microsoft Research, have developed a tool that they believe will help: LiveANDES (Advanced Network for the Distribution of Endangered Species).  The LiveANDES platform stores and parses data points about wildlife and natural areas by using photographs, audio and video recordings, and location and sighting information. Researchers use the data to identify species, where they currently dwell, and possible threats to their future. The tool is also capable of parsing huge volumes of recorded data so that it is manageable for researchers.

Bio: Cristián Bonacic is an associate professor in the School of Agriculture and Forestry, Pontificia Universidad Catolica de Chile, and has led a wildlife conservation research group for more than 10 years in Chile. His research interests include automatic and remote systems for wildlife surveillance, citizen science, IT applied to wildlife conservation, and networking of biodiversity conservation scientists. He earned a D.Phil. in zoology at Britain’s Oxford University.

 

Dan Fay

Expanding Your Horizons
Increasing our professional visibility beyond what was possible in the past is becoming easier: information sharing is faster and wider. This interactive discussion goes beyond traditional academic approaches to sharing results by examining the role of online connections and communication channels, including social media. Come and join us; share your thoughts and best practices about how to expand research exposure.

Bio: Dan Fay is director of the Earth, Energy, and Environment effort at Microsoft Research Connections and works with academic scientists on related topics. Previously, he handled North America as part of the Technical Computing Initiative. Dan serves as a member of the Purdue University Computer and Information Technology Industrial Advisory Board. He is a graduate of Northeastern University.

 

Dennis Gannon

Cloud Computing for Fourth Paradigm Challenges in Scientific Research
Cloud computing provides us with a new paradigm for addressing the computing and data analysis challenges confronting many scientific disciplines.  Unlike traditional supercomputers, the cloud can support different styles of computation that are well suited for collaboration and data analysis. For the last three years we have been working with academic researchers to explore the potential of this new platform.   We now have over 90 research projects using Windows Azure and we have learned a lot.   This talk will describe the discoveries we have made and outline the potential we see for future research using the cloud.  

Bio: Dr. Dennis Gannon is the Director of Cloud Research Strategy for Microsoft Research Connections. Dr. Gannon's research interests include cloud computing, data analytics and “big data” platforms, large-scale cyberinfrastructure, distributed computing, parallel programming, computational science and problem solving environments.   At Microsoft he and his team are working with the research community to demonstrate the potential of cloud computing to enable broad access to data-intensive scientific research.  Prior to coming to Microsoft, Dr. Gannon was a professor and former chair of computer science at Indiana University and the Science Director for the Indiana Pervasive Technology Labs.  He has published over 100 refereed articles and he has co-edited 3 books.  Dr. Gannon received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign after receiving a Ph.D. in Mathematics from the University of California, Davis.

 

Drew Purves

Predicting the Future of All Life on Earth
If humanity is going to safeguard its collective future, then it will need to make predictions about how different aspects of the Earth System – e.g. the carbon cycle, biodiversity, food production, deforestation, wood production, fertilizer use and of course climate – might change in the future under various scenarios. At the very centre of each of these aspects is life, so the challenge is nothing less than to predict the future of all life on Earth! At the Computational Ecology and Environmental Science group, we carry out the novel, fundamental ecological research that is necessary for building predictive models of ecosystems, and develop the novel software required to do so. In this talk I will give examples of our work pertaining specifically to the Amazon basin, including predictive models of carbon cycling and carbon storage, biodiversity (and its response to land-use change), leaf phenology, deforestation, and agricultural productivity. I will explain how we built these models using our in-house tools (including FetchClimate, Filzbach, and Distribution Modeller), and will be keen to discuss how we can combine the predictions of such models in order to support effective decision making in the Amazon, and more broadly.

Data-Constrained Environmental Modelling: FetchClimate, Filzbach, and Distribution Modeller
There is an obvious and urgent need to build predictive models of important environmental phenomena. Such models need to describe how variation in different aspects of the environment – such as climate and soil – affects the phenomenon of interest, e.g. primary productivity of plants, agricultural yield, or even land-use change. But to date, the building of such predictive models has been held back by a host of technical barriers, placing it outside the reach of many environmental scientists (and making it annoyingly difficult and slow for the rest!). In the first half of this tutorial we will concentrate on FetchClimate, an HTML5 browser application that makes it very easy and quick to get the environmental information needed to drive the models. You'll learn how to use FetchClimate to perform several important classes of query, including grids of climatology statistics and collections of time series (whether year-to-year or day-to-day), and to retrieve some future climate predictions too. Next, we'll move on to Filzbach, a generic Bayesian parameter estimation engine that allows you to define an arbitrary model, then parameterize that model against data. We'll give examples of using Filzbach from C++, R, and Matlab – and explain the key statistical concepts that Filzbach embodies, too. Finally, you will be among the first to try 'Distribution Modeller', a new browser application that ties FetchClimate, Filzbach, and other pieces together to provide an end-to-end environment for rapidly building and parameterizing models – then pushing them into FetchClimate so that they can be run on demand by anyone, anywhere.
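
As a taste of the key statistical idea, the sketch below implements the generic Bayesian parameter-estimation loop that an engine of this kind automates: propose a parameter change, compare likelihoods, accept or reject, and read the posterior off the chain. It is written in Python purely for illustration and does not use Filzbach's actual API; the model, data, and step size are assumptions.

    # A minimal random-walk Metropolis sampler fitting the mean of a normal
    # model to synthetic data (flat prior, known sigma = 1). Illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=3.0, scale=1.0, size=100)    # synthetic observations

    def log_likelihood(mu):
        return -0.5 * np.sum((data - mu) ** 2)         # Gaussian log-likelihood

    mu, samples = 0.0, []
    for _ in range(5000):
        proposal = mu + rng.normal(scale=0.5)          # random-walk proposal
        if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(mu):
            mu = proposal                              # accept; otherwise keep mu
        samples.append(mu)

    posterior = np.array(samples[1000:])               # drop burn-in
    print(posterior.mean(), posterior.std())           # posterior estimate of mu

Roughly speaking, an engine like Filzbach takes care of this machinery (proposals, acceptance, multiple parameters) so that the modeller mainly supplies the likelihood, whether from C++, R, or Matlab.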

Bio: Drew Purves is a computational ecologist in the Computational Sciences Lab at Microsoft Research Cambridge. Purves studied and researched at Cambridge, York (U.K.), and Princeton before joining Microsoft Research Cambridge in 2007. His research focuses on populations and communities of plants, especially forests, and has led to about 20 publications in peer-reviewed journals, including “Science,” “PNAS,” and “Proc Roy Soc B.” Matthew Smith is a postdoctoral ecologist in the same lab. He has a B.Sc. in Ecology and a Ph.D. in Mathematical Ecology. Smith is a research generalist, driven principally by the desire to identify, and then apply, principles and methods that cut across disciplines in science. He has published 18 papers in peer-reviewed journals and is the co-author of four books. 

 

Eduardo César Marques

GIS applications for the Social Sciences and for Urban Studies in Brazil
The use of quantitative geographical analyses of social and urban processes is quite frequent internationally. In Brazil, however, the use of these tools began late, mainly due to the absence of publicly available databases. Since the 1990s, the situation has improved substantially, with the free-of-charge provision of data from official agencies, but also from research institutions. This presentation explores examples of different uses of spatialized quantitative information on São Paulo and other Brazilian cities to characterize and investigate social processes, as well as to support decision-making in social and urban policies.

 

Harold Javid

Bio: Harold Javid is director of the Microsoft Research Connections regional programs for North America, Latin America, and Australia/New Zealand. His team works with the academic research communities in these regions to build rich collaborations, including joint centers in the US, Brazil, and Chile; faculty summits and other events; and talent development programs such as the Microsoft Research Faculty Fellows program. Harold has had a long career in research organizations, working for companies like General Electric, Boeing, and now Microsoft, where he has made advances in the application of optimization and computing algorithms in industries such as power, aerospace, and pulp and paper. Harold is a member of the Board of Governors of the IEEE Computer Society. He received his PhD in Electrical Engineering from the University of Illinois Urbana-Champaign, where he made advances in optimization for multiple time-scale dynamic systems.

 

Jarek Pillard

Web-Based Bioinformatics

 

Jie Liu

CLEO: Cultivating the Longtail of E-science Observations
The goal of project CLEO is to develop devices and services that encourage and enable participatory sensing and citizen scientists. A core technology developed in the project makes location sensing energy efficient, so devices can be small, light, and low cost, and can sample more frequently. The approach is called Cloud-Offloaded GPS (or CO-GPS). A second service is a web-based sensor data management service called CLEO DB. It leverages SQL Azure and an OData interface to support open data access.
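
To give a flavor of what open data access through an OData interface looks like, here is a hypothetical query: the endpoint URL, entity set, and field names are invented for this example, and only the standard OData query options ($filter, $top, $format) are real conventions.

    # Hypothetical query against an OData service in the style of CLEO DB.
    import requests

    base = "https://example.cloudapp.net/cleodb.svc"   # invented endpoint
    params = {
        "$filter": "SensorId eq 42 and Timestamp ge datetime'2013-01-01T00:00:00'",
        "$top": "100",
        "$format": "json",
    }
    resp = requests.get(base + "/Observations", params=params)
    for row in resp.json()["d"]["results"]:            # OData v2 JSON envelope
        print(row["Timestamp"], row["Value"])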

Location Sensing on Mobile Devices
Location-based services have become ubiquitous thanks to sensors like GPS and WiFi in our smartphones and other mobile devices, and location sensing has the potential to change how scientists collect data and conduct experiments. However, continuous location sensing, such as logging, tracking, and geo-fencing, consumes too much energy and shortens device battery life. In this talk, we take a fresh look at location sensing in both outdoor and indoor settings. For outdoor location, we dive into the principles of GPS receivers and show that by offloading GPS processing to the cloud, we can reduce the device-side energy consumption by three orders of magnitude. For indoor location, we discover that commercial FM signals are good sources of location signatures that work better than WiFi signatures alone, and better still when combined with WiFi signatures. These low-energy alternatives enable always-there location services without users paying a battery-life penalty.
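
The size of the saving comes from how little time the radio must be on: a standalone receiver has to stay powered for tens of seconds to decode satellite ephemeris, while a cloud-offloaded receiver only records a few milliseconds of raw signal and lets the server do the heavy processing. The back-of-envelope numbers below are illustrative assumptions, not measurements from the talk.

    # Back-of-envelope arithmetic with assumed (not measured) numbers.
    standalone_power_w = 0.4     # assumed GPS receiver power draw
    standalone_time_s = 30.0     # assumed time to acquire satellites + decode ephemeris
    standalone_joules = standalone_power_w * standalone_time_s    # ~12 J per fix

    cogps_power_w = 0.3          # assumed RF front-end + flash-logging power
    cogps_time_s = 0.01          # ~10 ms of raw baseband samples per fix
    cogps_joules = cogps_power_w * cogps_time_s                   # ~3 mJ per fix

    # ~4000x with these assumptions: the ballpark of "three orders of magnitude".
    print(standalone_joules / cogps_joules)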

Bio: Dr. Jie Liu is a Principal Researcher at Microsoft Research, Redmond, WA, and the manager of its Sensing and Energy Research Group. His research interests are rooted in understanding and managing the physical properties of computing; examples include timing, location, energy, and the awareness of and impact on the physical world. He has published broadly in areas like sensor networks, embedded systems, ubiquitous computing, and energy-efficient cloud computing. Dr. Liu is an Associate Editor of ACM Transactions on Sensor Networks, has been an Associate Editor of IEEE Transactions on Mobile Computing, and has chaired a number of top-tier conferences. He is an ACM Distinguished Scientist. Dr. Liu received his Ph.D. in Electrical Engineering and Computer Sciences from UC Berkeley in 2001. From 2001 to 2004, he was a research scientist at the Palo Alto Research Center (formerly Xerox PARC).

 

Juliana Salles

Bio: Juliana Salles is Microsoft Research Connections’ senior research program manager in Brazil, where she engages with academics to identify globally critical, high-impact research projects. She is currently working on projects that use technology to enable or accelerate knowledge in such areas as tropical environments and their response to climate change, bioenergy, and biodiversity. She is also leading initiatives to attract and retain women in computing in Latin America. Juliana has a PhD in human-computer interaction and since joining Microsoft has worked as a UX researcher for several product teams, including Visual Studio, Windows Live, and Windows Live Mobile. Her interests include user research techniques and methodology and their integration with the software development process.

 

Kristin Tolle

DataUp
Data sharing, management, and curation have become critical to scientists as well as private and public agencies that support their work. DataUp makes it easy for scientists and researchers to integrate the archiving, sharing, and publishing of tabular data into scientific workflows.

Bio: Kristin M. Tolle, Ph.D., is a director in the Microsoft Research Connections team and a Clinical Associate Professor at the University of Washington. Since joining Microsoft, Dr. Tolle has acquired several patents and worked for several product teams, including the Natural Language Group, Visual Studio, and Excel. Prior to joining Microsoft, Dr. Tolle was a Research Associate at the University of Arizona Artificial Intelligence Lab, managing the group on medical information retrieval and natural language processing. Her research interests include contextual computing, natural language processing and machine translation, mobile computing, user intent modeling, natural user interactions, and information extraction.

 

Marcos Buckeridge

The use of computing sciences in plant systems biology: How can this association help us to produce more and better food and bioenergy while at the same time coping with the impacts of global climate change?
Food and biofuel production and global climate changes (GCC) are modern transdisciplinary issues in which scientists are key contributors. Because plants are the primary source of biomass production on the planet, thanks to their ability to perform photosynthesis, as well as being the greatest CO2 consumers, one of the most important scientific aspects that permeates these major issues for humanity is our level of understanding of how plants function. In this presentation, I will talk about how apparently simple results of plant physiology, obtained for native plant species around 20 years ago, turned out to reveal extremely complex and interconnected network systems, which led me to conclude that the only way to go further would be to integrate plant biology with mathematics and computing sciences. I will present one project (Microsoft-FAPESP), now being developed in collaboration with computing scientists, to create new tools that integrate different scales within the functioning plant, i.e. transcriptomics, metabolomics and physiology. One long-term goal is to develop tools that could help to evaluate plant behaviour in the environment in a systemic way, so that we could better evaluate the impacts of the GCC. The other concerns crop plants: we expect that knowledge and control of how systems integrate at different scales will make it possible to apply synthetic biology techniques to plants in order to produce more and better food and bioenergy.

Financed by Microsoft-FAPESP; the INCT is financed by FAPESP/CNPq and the University of São Paulo (NAP e-science).

 

Maria Cristina F. de Oliveira

Challenging multidimensional data
In this talk I will present some illustrative examples of the difficulties faced daily by many professionals, scientists or not, when trying to make sense of data (their own or others'), which is often high-dimensional. We all need better tools for data analysis, and I hope to make the case that including visualization in our repertoire of data analysis tools can make a lot of difference. However, using and developing visualizations demands several kinds of expertise, and many challenges remain in increasing the availability and usability of current techniques. I would like to motivate you to contribute to visualization research by discussing some of these challenges from the perspective of my experience with a particular category of techniques for visualizing high-dimensional data.

 

Marta Mattoso

User-Steering on Cloud Workflows
Scientific Workflow Management Systems have successfully demonstrated their capabilities in several scientific areas, evidencing their contribution to e-Science. Executing workflows with high-performance computing in cloud environments has many advantages, but also open issues, such as user steering of workflows. Many workflow users demand steering features such as real-time monitoring, analysis, and execution interference, and workflow execution should respond dynamically to such interference to support the experimentation process, especially in the cloud. In this talk I will discuss the research challenges of steering by scientists and present some ideas on how these issues may be supported in current workflow technologies. Querying workflow provenance data at run time plays an important role in user steering, as shown in our own approach to facing some of these challenges.
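
As a concrete, hypothetical illustration of a run-time provenance query used for steering (the schema and values below are invented, not the speaker's system), a scientist might watch failure patterns while the workflow is still executing:

    # Invented provenance schema; the steering query asks which activities
    # are failing, and how long failed tasks ran, while the workflow executes.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE task_exec (
        task_id INTEGER, activity TEXT, status TEXT, elapsed_s REAL)""")
    db.executemany("INSERT INTO task_exec VALUES (?, ?, ?, ?)", [
        (1, "align", "FINISHED", 42.0),
        (2, "align", "RUNNING",  55.0),
        (3, "solve", "FAILED",   12.0),
    ])

    for row in db.execute("""SELECT activity, COUNT(*), AVG(elapsed_s)
                             FROM task_exec WHERE status = 'FAILED'
                             GROUP BY activity"""):
        print(row)   # e.g. ('solve', 1, 12.0) -> a candidate for interference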

 

Ricardo da S. Torres

Detecting Remote Phenology Patterns: A Computer Vision Approach
"e-phenology: The application of new technologies to monitor plant phenology and track climate changes in the tropics.
Environmental changes are becoming an important issue in the world. An example that represents these problems arise in the context of phenology studies. Phenology, the study of natural recurring phenomena and its relation to climate, is a traditional science of observe the cycles of plants and animals and relate mainly to local meteorological data, as well as to biotic interactions and phylogeny.

Recently, phenology has gained importance as the simplest and most reliable indicator of the effects of climate change on plants and animals. The strongest results connecting, for instance, changes in the timing of first flowering, leafing, and bird migration to recent global warming have come from long-term phenological series in the Northern Hemisphere, where historical data sets have been collected for decades.

The scarcity or lack of information and monitoring systems in tropical regions, in particular South America, has stimulated several research centers. Among the many efforts under way, e-phenology stands out: a multidisciplinary project combining research in computing and phenology, with the purpose of solving the practical and theoretical problems involved in applying new technologies to the remote observation of phenology.

Bio: Ricardo holds a bachelor's degree in computer engineering (Universidade Estadual de Campinas - UNICAMP, 2000) and a PhD in computer science (UNICAMP, 2004). He has been a professor at the Instituto de Computação of UNICAMP since 2005. He was associate coordinator and then coordinator of the bachelor's program in computer science at UNICAMP (07/2005 to 06/2007 and 07/2007 to 06/2009, respectively), and associate coordinator and then coordinator of the computer engineering program (07/2009 to 06/2010 and 07/2010 to 06/2011, respectively). He has been Director of the Institute of Computing at UNICAMP since 03/2013, and has held a tenured professorship (level 5.3) since 2012. He has published more than 100 papers in selective, peer-reviewed conferences and journals and has served as a reviewer and as a program-committee coordinator for several conferences. He has worked on dozens of projects, coordinating two CNPq Universal projects and a FAPESP Regular Line project, and has supervised five PhD theses, around 20 master's dissertations, and dozens of undergraduate research projects, in addition to supervising post-doctoral internships. His research covers databases, image processing, and digital libraries, working mainly on topics related to content-based image retrieval.

 

Rob DeLine

Improving the User Experience of Big Data Analytics
Data science today is like software development in the mainframe era: data scientists twiddle their thumbs waiting for big batch jobs to complete and shuffle data around between multiple independent tools, often through tedious clerical work. A typical workflow might include map/reduce systems (Hadoop), database management systems (MySQL), spreadsheets, scripting environments (Python), statistical programs (R, Matlab) and machine learning tools (Weka). These bureaucratic workflows have several disadvantages, including the barrier of learning all these tools, the vigilance needed to prevent mistakes, the difficulty of preserving provenance and reproducibility, and the extra effort required to share data sets and analyses. To address these problems, I'll present a demo of our prototype environment for data science, called "Stat!". The goal of "Stat!" is to allow a data scientist to accomplish an entire workflow, from raw data to final presentations, in one environment. This integration creates the opportunity for high productivity, automated checking, and preservation of data provenance. The project's long-term goal is to democratize data analysis so that, say, the average spreadsheet user can use statistics and machine learning to draw valid conclusions about a data set of her choice.

Bio: Dr. Robert DeLine is a principal researcher at Microsoft Research, who studies the work practices of software developers and, more recently, data scientists. From 2005 to 2012, Dr. DeLine founded and managed a research group dedicated to the user-centered design of software development tools, with a focus on information seeking, program comprehension and task management. In collaboration with colleagues, he has invented development environments that exploit spatial memory (Debugger Canvas, Code Canvas), a recommendation system for program comprehension (Team Tracks), type systems to enforce API protocols (Fugue, Vault), a software architecture environment (UniCon), and a popular environment for end-user programming (Alice). He received his PhD from Carnegie Mellon University in 1999 and his MS from the University of Virginia in 1993.

 

Rob Fatland

Layerscape
A cloud-based user experience, Layerscape employs powerful, everyday tools to analyze and visualize complex Earth and ocean datasets, enabling scientists to gain new insights into Earth's environment. Users can create and share 3-D virtual tours based on their discoveries and collaborate with the Earth-science community in ways that previously seemed impossible. Build your own virtual tours and experience the possibilities.

Data Visualization In Layerscape
This tutorial will focus on data, specifically visualization toward insight, in relation to the Layerscape toolkit. We will begin with the basic navigation and structural concepts of the WWT application, including time, viewing modes, data types, and data capacity. We then proceed to example datasets: how these are typically represented and how they can be imported into WWT for rendering. We next explore storytelling around the data through the construction of tours, and we will discuss tour publication at http://layerscape.org, including preservation of metadata. We then discuss the integration of data services such as a generic Web Mapping Service and FetchClimate. Throughout, we will use Excel and the Excel add-in for WorldWide Telescope. Finally, we will turn to the WWT API and the developer's toolkit "Narwhal", which facilitates more complex rendering of data. By the conclusion of this tutorial the attendee should have a good idea of how the various parts of Layerscape fit together around the WWT visualization engine, and how an imaginative translation of geospatial and/or abstract data to pixels can give new understanding of what the numbers really mean.

Bio: Rob Fatland works at Microsoft Research on applications of technology to information challenges in environmental science. His career has included research in glacier dynamics and seismically-driven surface deformation based on data from synthetic aperture radar satellites. He has also worked on embedded systems technology, developing wireless sensor networks for harsh environments. At Microsoft Research, he works to release research tools, such as Layerscape (a collaboration/visualization system) and SciScope (a search engine for hydrology data), for adoption and use by both academic and operational geoscience communities.

 

Roman Snytsar

ChronoZoom
ChronoZoom is an open-source community project dedicated to visualizing the history of everything. Big History is the attempt to understand, in a unified, interdisciplinary way, the history of the cosmos, Earth, life, and humanity. By using Big History as the story line, ChronoZoom seeks to bridge the gap between the humanities and the sciences and to make all this information easily understandable and navigable.

 

Simon Mercer

Medical Imaging Initiative Demo
Medical imaging is the world's most prolific generator of big data today. Intelligent machine-based analysis is the only practical way to process such vast amounts of data into medically useful information in a timely manner. We believe that medical image analysis is being held back by a lack of high-quality, sharable, and well-annotated image datasets for training and benchmarking algorithms. We plan a range of activities intended to reduce these bottlenecks, including a new open-source platform built by MSR on Azure in association with Stanford University. This platform will be demonstrated.

Bio: Simon Mercer has a background in zoology and has worked in various aspects of bioinformatics. Having managed the development of the Canadian Bioinformatics Resource, a national life science service-provision network, he later served as director of software engineering at Gene Codes Corporation before moving to the Microsoft Research Connections team in 2005. In his current role as director of health and wellbeing, he manages collaborations between Microsoft and academia in the area of healthcare research. Simon’s interests include bioinformatics, translational medicine, and the management of scientific data.

 

Tony Hey

Making a Difference for Science and Society

Bio: As vice president in Microsoft Research, Tony Hey is responsible for worldwide university research collaborations with Microsoft researchers. He also directs the multidisciplinary eScience Group within Microsoft Research. Prior to Microsoft, Tony served as director of the UK’s e-Science Initiative, where he oversaw government efforts to build a scientific infrastructure for collaborative, multidisciplinary, data-intensive research. Before that, he led a research group in parallel computing and was head of the School of Electronics and Computer Science and dean of Engineering and Applied Science at the University of Southampton. Tony is a fellow of the UK’s Royal Academy of Engineering and in 2005 was awarded the rank of Commander of the Most Excellent Order of the British Empire for services to science. He is a fellow of the British Computer Society, the Institute of Engineering and Technology, the Institute of Physics, and the American Association for the Advancement of Science. Passionate about communicating the excitement of science, he has co-authored popular books on quantum mechanics and relativity.

 

Vidya Natampally

VidWiki: Crowd-Enhanced, Online Educational Videos
Recent efforts by organizations such as Coursera, edX, Udacity, and Khan Academy have produced thousands of educational videos, logging hundreds of millions of views, in an attempt to make learning freely available to the masses. While the presentation style of the videos varies by author, they all share a common drawback: videos are time-consuming to produce and difficult to modify after release. VidWiki is an online platform that takes advantage of the massive number of students viewing online videos to iteratively improve video-presentation quality and content, similar to other crowdsourced information projects such as Wikipedia. Through the platform, users annotate videos by overlaying content atop the video, lifting from the instructor the burden of updating and refining content. Layering annotations also assists in video indexing, language translation, and the replacement of illegible handwriting or drawings with more readable, typed content.
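
One way to picture such an overlay is as a small, versioned record anchored to a time span and a region of the frame, edited wiki-style. The field names below are invented for illustration and are not VidWiki's actual schema.

    # Hypothetical shape of a crowd-editable video annotation.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        video_id: str
        start_s: float       # when the overlay appears
        end_s: float         # when it disappears
        x: float             # normalized horizontal position over the frame
        y: float             # normalized vertical position
        w: float             # normalized width
        h: float             # normalized height
        content: str         # typed text replacing, e.g., illegible handwriting
        revision: int        # incremented on each wiki-style edit

    note = Annotation("lecture-3", 61.5, 75.0, 0.1, 0.2, 0.5, 0.12,
                      "d/dx sin(x) = cos(x)", revision=3)
    print(note)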

Bio: Vidya is the Director of Strategy for Microsoft Research India and joined Microsoft Research India in 2006. She is responsible for Microsoft Research India’s external partnership and collaborations. Vidya heads Microsoft Research Connections, which aims to strengthen the computer science research ecosystem in India. Vidya works extensively with industry, government, and universities both within and outside India. The Microsoft Research Connections team at Microsoft Research India focuses on capacity building, research collaborations, and programs to address societal challenges and empower communities with tools and technologies. In addition, the team works with industry to encourage innovation. Prior to joining Microsoft, Vidya worked with a number of leading IT companies as a communications consultant where she has been instrumental in defining communications and business strategies.

