
Projects

by William Joel — last modified 2012-08-17 18:13
  1. Development of a Quantitative Style Model for 3D Character Motion

    Participants:

    • William J. Joel (Project leader)
    • Jörn Loviscach
    • Jorge Dorribo Camba
    • Mark Chavez
    • Ron Coleman
    • Zhigeng Pan

    Description:

    To better understand and simulate 3D character motion, this project will attempt to create a style model that quantitatively represents a character's motion. Initially, the model will be limited to walk and run cycles. Such a model could be used in first-person games, 3D cartoon character animation, and similar applications, with each character having its own unique set of parameters. Data for the model can be extracted via motion capture, from live-action video, from cartoon animations, or generated synthetically.
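
    As a rough illustration, such a model might reduce a gait cycle to a small vector of numeric features per character. The sketch below is a minimal, hypothetical example assuming hand-picked gait parameters; it is not the project's actual model.

        # A hypothetical per-character style record for a walk or run cycle.
        from dataclasses import dataclass

        @dataclass
        class GaitStyleParameters:
            character_id: str
            cycle_type: str             # "walk" or "run"
            stride_length: float        # meters per cycle
            cycle_duration: float       # seconds per cycle
            hip_sway_amplitude: float   # lateral pelvis motion, meters
            vertical_bounce: float      # center-of-mass oscillation, meters
            arm_swing_amplitude: float  # shoulder rotation, radians

        def style_distance(a: GaitStyleParameters, b: GaitStyleParameters) -> float:
            """Euclidean distance between two style vectors: a simple,
            quantitative way to compare two characters' motion."""
            features = ("stride_length", "cycle_duration", "hip_sway_amplitude",
                        "vertical_bounce", "arm_swing_amplitude")
            return sum((getattr(a, f) - getattr(b, f)) ** 2 for f in features) ** 0.5
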
  2. Toward a Naturally Signing Avatar

    Participants:

    • Rosalee Wolfe (Project leader)
    • John McDonald
    • Marie Stumbo (BS '14)
    • Allison Lewis (BS '12)
    • Farah Thompson (BS '13)

    Description:

    Our goal is to translate English into American Sign Language (ASL), the language of the Deaf in North America. ASL differs from English and has its own grammar; it is at least as different from English as any other natural language. An automatic English-to-ASL translator would give Deaf people greater access to the hearing world. Currently we are developing tools that generate ASL as animation in response to spoken English. An automated ASL synthesizer will make more information accessible to Deaf people at a lower cost. It has the potential to let Deaf people participate in, and more fully follow, the exchanges among a hearing audience in classrooms, meetings, and other venues, and it will give Deaf people a better tool than English documents or notes for understanding content. Last year we focused on lower-body support for ASL; this year we are focusing on rendering efficiency, with the goal of real-time display. For more information, see http://asl.cs.depaul.edu
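
    As a sketch of the pipeline such a translator implies, the stages below are purely hypothetical stubs; none of the names come from the project's actual tools.

        # Hypothetical stages for a spoken-English-to-ASL pipeline.
        from dataclasses import dataclass, field

        @dataclass
        class AnimationClip:
            """Placeholder for the avatar animation produced by the synthesizer."""
            sign_sequence: list = field(default_factory=list)

        def speech_to_text(audio: bytes) -> str:
            """Recognize spoken English, e.g. with an off-the-shelf ASR system."""
            raise NotImplementedError

        def english_to_gloss(text: str) -> list:
            """Translate English into a sequence of ASL glosses. ASL has its own
            grammar, so this is a genuine machine-translation step, not a
            word-for-word substitution."""
            raise NotImplementedError

        def gloss_to_animation(glosses: list) -> AnimationClip:
            """Drive the signing avatar: select signs, blend transitions, and add
            non-manual signals such as facial expression and body posture."""
            raise NotImplementedError

        def translate_speech_to_asl(audio: bytes) -> AnimationClip:
            return gloss_to_animation(english_to_gloss(speech_to_text(audio)))
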
  3. Ray Tracing Visualization Toolkit

    Participants:

    • Christiaan Gribble (faculty)
    • Jeremy Fisher (student, project leader)
    • Daniel Eby (student)
    • Ed Quigley (student)
    • Frank Serra (student, project leader)
    • Andrew Claudy (student)
    • Gideon Ludwig (student)

    Description:

    We introduce the Ray Tracing Visualization Toolkit (rtVTK), a collection of programming and visualization tools supporting visual analysis of ray-based rendering algorithms. rtVTK comprises a library for recording and processing ray state, together with a flexible software architecture for visualization components, integrated via an extensible GUI. rtVTK enables an investigator to inspect, interrogate, and interact with the computational elements of the ray tracing algorithm itself, thereby promoting a deeper understanding of how computation proceeds.

    Our goal is to employ real-time ray tracing for applications in fields as diverse as science, engineering, history, and the arts. Many of these applications require predictive images: those in which computer-generated results are identical to the photometric and radiometric values obtained by measuring a scene in the physical world. Ray-based rendering algorithms are ideally suited to this task. Typically, these algorithms simulate the behavior of photons as they interact with objects in an environment according to the laws of geometric optics. These interactions are often very complex, and depend on the spatial arrangement of objects in a scene, their material properties, and the optical effects captured by the particular algorithm in use. Moreover, generating a high-fidelity result requires computing many millions (if not billions) of ray/object interactions. Thus, the complexity encountered in predictive rendering applications limits the practicality of current approaches for real-time image synthesis. Even with recent advances targeting highly parallel platforms [van Antwerpen 2011], many seconds of computation are required for results to converge. As such, rendering predictive images at real-time frame rates is not yet feasible.

    Importantly, predictive applications also require that advanced ray-based rendering algorithms be physically correct for the results to be effective, much more so than applications requiring simply plausible or visually convincing results. Here, too, the complexity of scene geometry, material properties, and optical effects used in predictive rendering leads to difficulties in ensuring program correctness: traditional software debugging tools are not designed to leverage the inherent visual nature of computer graphics computation, and thus lead to a cumbersome debugging process.

    We believe that effective visualization of ray tracing state will promote deeper understanding of how computation proceeds, addressing a wide variety of problems in ray-based rendering. For example, a subtle and long-standing bug related to secondary ray generation in a batch renderer was exposed as a result of visualization with rtVTK. Similarly, anecdotal evidence from an undergraduate computer graphics course suggests that students of ray tracing are better able to grasp the algorithm's details by interacting with a visual representation of the computation. Moreover, visualization with rtVTK may enhance tasks in ray tracing performance analysis by enabling insights beyond the summary statistics provided by traditional analysis tools. Finally, because of its flexibility and extensibility, we believe that rtVTK will be well suited to new, possibly unforeseen, problem domains and application areas as well. See http://www.rtvtk.org
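
    To illustrate the idea of recording ray state for later visual analysis, here is a toy sketch in Python; rtVTK itself is written in C++, so this only mirrors the concept, and none of the class or method names below are rtVTK's actual API.

        # Toy ray-state logger: collect the tree of rays traced per pixel so a
        # visualization component can replay and inspect the computation.
        from __future__ import annotations
        from dataclasses import dataclass, field

        @dataclass
        class RayRecord:
            origin: tuple[float, float, float]
            direction: tuple[float, float, float]
            depth: int                  # 0 = primary ray, >0 = secondary rays
            hit_point: tuple[float, float, float] | None = None
            children: list[RayRecord] = field(default_factory=list)

        class RayLogger:
            def __init__(self) -> None:
                # Primary rays keyed by pixel; secondary rays hang off .children.
                self.pixels: dict[tuple[int, int], list[RayRecord]] = {}

            def record(self, pixel: tuple[int, int], ray: RayRecord) -> None:
                self.pixels.setdefault(pixel, []).append(ray)

            def rays_at(self, pixel: tuple[int, int]) -> list[RayRecord]:
                """Everything traced through a pixel, for inspection or replay."""
                return self.pixels.get(pixel, [])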

    References:

    van Antwerpen, D. 2011. Improving SIMD efficiency for parallel Monte Carlo light transport on the GPU. In Proceedings of High Performance Graphics 2011, 41–50.
