Date and Location | Speaker | Topic | Reading |
9/11 15:30-16:30 Eng Center ECCR 265 |
Gerhard Fischer, Comp. Sci. Dept., University of Colorado | Rich landscapes of learning: Exploring core competencies for MOOCs and residential, research-based universities Learning is the central activity of the 21st century. It needs to be reconceptualized, nurtured, and supported to meet numerous intellectual and economic challenges by taking advantage of transformative theoretical frameworks and innovative technologies. Massive Open Online Courses (MOOCs) are receiving world-wide attention as a means to revolutionize education. The excitement and hype around MOOCs are grounded in promises of being disruptive, being free, and providing a totally new kind of learning experience. The attention to MOOCs has moved beyond academic circles. Neither panacea nor snake oil, MOOCs evoke serious questions that deserve informed debate grounded in the learning sciences, complementing the current discussions from economics and technology. The presentation will analyze MOOCs as one component of a rich landscape for learning. In doing so, MOOCs can serve as a forcing function to identify and reflect on the core competencies of residential, research-based universities (such as CU Boulder) in nurturing and supporting aspects of learning that cannot be easily addressed by MOOCs. |
|
9/12 12:00-13:30 Muenzinger D430 |
Todd Gureckis, Psychology Dept., NYU | Understanding the
decision to learn Any complete theory of human learning must explain not only what is gleaned from the information we experience, but also the capacity for our choices and actions to expose that information. Interestingly, many experimental studies of learning and memory emphasize "passive" learning by limiting participants’ control over the information they experience at each point in time. In this talk, I will discuss recent work in my lab exploring how people gather information in "self-directed" learning environments - those where the learner is in control of what to learn about and when to learn it. The primary aim of this research is to characterize the information sampling strategy that participants use to reduce their uncertainty, and to examine how self-directed learning influences the acquisition of new knowledge. The evidence presented in the talk suggests three key take-home points: 1.) people can learn faster when they can select and sequence learning episodes themselves, but this depends, in a dynamic way, on the structure of the to-be-learned concepts and the space of hypotheses that the learner considers; 2.) people select information-gathering strategies in an adaptive fashion that trades off their expected performance against the implied cognitive effort; and 3.) self-directed learning helps to enhance memory by helping learners coordinate stimulus presentation with their current preparatory or attentional state. Implications of this work for education and instructional design, as well as for the cognitive science of learning, will be entertained. |
|
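A common way to formalize the "reduce their uncertainty" idea in self-directed learning research is expected information gain: a learner holds beliefs over candidate hypotheses and picks the query whose answer is expected to shrink the entropy of those beliefs the most. The sketch below is a generic, hypothetical illustration of that computation; the hypotheses, queries, and probabilities are invented for the example and are not a model from the talk.

```python
import math

def entropy(beliefs):
    """Shannon entropy (bits) of a dict mapping hypothesis -> probability."""
    return -sum(p * math.log2(p) for p in beliefs.values() if p > 0)

def posterior(beliefs, likelihoods):
    """Bayes rule: likelihoods maps hypothesis -> P(observed answer | hypothesis)."""
    unnorm = {h: p * likelihoods[h] for h, p in beliefs.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def expected_information_gain(beliefs, query):
    """query maps each hypothesis to P(answer = 'yes' | hypothesis)."""
    gain = 0.0
    for answer in ("yes", "no"):
        like = {h: (query[h] if answer == "yes" else 1 - query[h]) for h in beliefs}
        p_answer = sum(beliefs[h] * like[h] for h in beliefs)
        if p_answer > 0:
            gain += p_answer * (entropy(beliefs) - entropy(posterior(beliefs, like)))
    return gain

# Made-up example: three hypotheses about a hidden category, two candidate queries.
beliefs = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
queries = {
    "ask about feature A": {"h1": 0.9, "h2": 0.1, "h3": 0.1},  # separates h1 from the rest
    "ask about feature B": {"h1": 0.5, "h2": 0.5, "h3": 0.5},  # uninformative
}
best = max(queries, key=lambda q: expected_information_gain(beliefs, queries[q]))
print("most informative query:", best)
```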
9/18 15:30-16:30 ECCS 265 | Aaron Clauset, Computer Science, U. of Colorado | research area: computational social science, network theory, complex systems | |
9/19 12:00-13:30 Muenzinger D430 |
Samar Husain, Linguistics Dept., University of Potsdam |
Strong expectations
cancel locality effects: Evidence from
Hindi Expectation-driven facilitation (Hale, 2001; Levy, 2008) and locality-driven retrieval difficulty (Gibson, 1998, 2000; Lewis & Vasishth, 2005) are widely recognized to be two critical factors in incremental sentence processing; there is accumulating evidence that both can influence processing difficulty. However, it is unclear whether and how expectations and memory interact. We first confirm a key prediction of the expectation account: a Hindi self-paced reading study shows that when an expectation for an upcoming part of speech is dashed, building a rarer structure consumes more processing time than building a less rare structure. This is a strong validation of the expectation-based account. In a second study, we show that when expectation is strong, i.e., when a particular verb is predicted, strong facilitation effects are seen when the appearance of the verb is delayed; however, when expectation is weak, i.e., when only the part of speech “verb” is predicted but a particular verb is not predicted, the facilitation disappears and a tendency towards a locality effect is seen. The interaction seen between expectation strength and distance shows that strong expectations cancel locality effects, and that weak expectations allow locality effects to emerge. |
|
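The expectation-based account cited above (Hale, 2001; Levy, 2008) is usually formalized as surprisal: the predicted difficulty of a word is proportional to -log P(word | preceding context), so rarer continuations should take longer to process. The sketch below illustrates the quantity with a made-up bigram model; the corpus and counts are invented for the example and are unrelated to the Hindi materials in the study.

```python
import math
from collections import Counter, defaultdict

# Toy corpus; the sentences and counts are invented purely for illustration.
corpus = [
    "the editor praised the author",
    "the editor praised the book",
    "the editor left",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = ["<s>"] + sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

vocab = {w for counts in bigrams.values() for w in counts} | set(bigrams)

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev), with add-one smoothing."""
    count = bigrams[prev][word] + 1
    total = sum(bigrams[prev].values()) + len(vocab)
    return -math.log2(count / total)

# The more expected continuation after "editor" carries lower surprisal,
# i.e., less predicted processing difficulty, than the rarer one.
print(surprisal("editor", "praised"), surprisal("editor", "left"))
```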
9/29 12:00-13:00 Muenzinger E214 |
Stephen Palmer, Psychology Dept., UC Berkeley | Visual perception and aesthetics | |
10/6 16:00-17:00 Muenzinger E214 | Matt Jones, Psychology Department, U. of Colorado | Learning and
Representation Representation -- the manner in which information or knowledge is encoded in the mind -- is at the heart of cognitive science. Moreover, flexibility of representation is arguably fundamental to the power of human intelligence, in that representing a problem or task environment in a way that somehow captures its inherent structure can be critical to successful learning, generalization, and discovery. This talk will summarize my work on two questions: (1) how can we as researchers identify the representation a person is using in a given task, and (2) how do people adapt or construct representations to suit their needs? For the first question, I will focus on my work using sequential effects in repeated tasks to reveal representations and learning mechanisms, the basic principle being that sequential effects are a signature of how knowledge is updated. For the second question, I will describe a theoretical framework integrating theories of representation with reinforcement learning, and my lab's efforts to develop models that build new concepts by discovering structure in the world. Finally, I will summarize some of my more meta-theoretical work on the explanatory contributions of computational models of cognition, and explain how some of the problems I have identified in currently influential models of decision-making might be solved by incorporating my work on learning and sequential effects. |
|
10/10 12:00-13:30 Muenzinger D430 |
Jordan Boyd-Graber, Comp. Sci. Dept., University of Colorado |
Thinking on your Feet:
Reinforcement Learning for Incremental
Language Tasks In this talk, I'll discuss two real-world language applications that require "thinking on your feet": synchronous machine translation (or "machine simultaneous interpretation") and question answering (when questions are revealed one piece at a time). In both cases, effective algorithms for these tasks must interrupt the input stream and decide when to provide output. Synchronous machine translation is when a sentence is being produced one word at a time in a foreign language and we want to produce a translation in English simultaneously (i.e., with as little delay as possible between a foreign-language word and its English translation). This is particularly difficult in verb-final languages like German or Japanese, where an English translation can barely begin until the verb is seen. Effective translation thus requires predictions of unseen elements of the sentence (e.g., the main verb in German and Japanese, relative clauses in Japanese, or post-positions in Japanese). We use reinforcement learning to decide when to trust our verb predictions; the system must learn to balance incorrect translations against timely translations, and must use those predictions to translate the sentence. For question answering, we use a specially designed dataset that challenges humans: a trivia game called quiz bowl. These questions are written so that they can be interrupted by someone who knows more about the answer; that is, harder clues are at the start of the question and easier clues are at the end. We create a recursive neural network to predict answers from incomplete questions and use reinforcement learning to decide when to guess. We are able to answer questions earlier in the question than most college trivia contestants. |
http://www.cs.colorado.edu/~ |
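The shared challenge in both applications above is learning when to commit to an output as input arrives incrementally. As a rough, hypothetical illustration (not the system described in the talk), the sketch below trains a tabular Q-learning agent on a simulated quiz-bowl-style task: a noisy "guesser confidence" grows as clues are revealed, and the agent learns whether to WAIT or BUZZ, trading off earlier (higher-reward) answers against the risk of being wrong. The environment, reward scheme, and parameters are all invented.

```python
import random
from collections import defaultdict

WAIT, BUZZ = 0, 1
Q = defaultdict(lambda: [0.0, 0.0])   # Q[(step, confidence_bucket)] -> [value of WAIT, value of BUZZ]
alpha, epsilon = 0.1, 0.1

def episode_confidences(length=10):
    """Simulated guesser confidence that (noisily) grows as more clues are revealed."""
    return [min(1.0, max(0.0, t / length + random.uniform(-0.2, 0.2)))
            for t in range(1, length + 1)]

def discretize(conf):
    return min(9, int(conf * 10))     # ten confidence buckets

for _ in range(20000):
    confs = episode_confidences()
    for t, conf in enumerate(confs):
        state = (t, discretize(conf))
        if random.random() < epsilon:
            action = random.choice([WAIT, BUZZ])
        else:
            action = BUZZ if Q[state][BUZZ] > Q[state][WAIT] else WAIT
        if action == BUZZ:
            # Earlier correct buzzes earn more reward; wrong buzzes are penalized.
            correct = random.random() < conf
            reward = (len(confs) - t) if correct else -10.0
            Q[state][BUZZ] += alpha * (reward - Q[state][BUZZ])
            break
        # WAIT: back up value from the next time step (penalty if we never answer).
        if t + 1 < len(confs):
            nxt = (t + 1, discretize(confs[t + 1]))
            target = max(Q[nxt])
        else:
            target = -5.0
        Q[state][WAIT] += alpha * (target - Q[state][WAIT])

# Inspect the learned policy at step 5: at which confidence buckets does the agent buzz?
buzz_buckets = [b for b in range(10) if Q[(5, b)][BUZZ] > Q[(5, b)][WAIT]]
print("confidence buckets where the agent buzzes at step 5:", buzz_buckets)
```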
10/10 18:00-19:30 Duane Physics G125 | William Bechtel, Philosophy, UCSD | Networks and Dynamics:
21st Century Neuroscience This talk is part of the conference entitled "Neurons, Mechanisms, and the Mind: The History and Philosophy of Cognitive Neuroscience". Two other talks at the conference are also authorized for the Topics course -- Carrie Figdor (U. of Iowa), "On the proper domain of psychological predicates" (Saturday Oct 11, 17:00-18:30, Duane G131), and Tor Wager (CU Boulder), "Role of verbal reports in studies on emotion and pain" (Sunday Oct 12, 16:00-17:30, Duane G125) |
|
10/17 12:00-13:00 Muenzinger D430 |
Tom Yeh, Computer Science, University of Colorado | 3D Tactile Picture Books
for Children with Visual Impairments Tactile pictures and graphics are critical to the development of emergent literacy skills for children with visual impairments. However, the practices of designing and producing tactile graphics have previously been limited to a small community due to the cost of manufacturing processes and the bounded expertise in tactile graphic design and 3D modeling. In this talk, I will present findings from seven workshops, conducted to identify and evaluate the barriers stakeholders encounter when designing 3D-printable tactile picture books, along with a series of design guidelines to reduce or eliminate these barriers. Moreover, I will describe a set of 3D-printable models designed as building blocks for creating movable tactile pictures that can be touched, moved, and understood by children with visual impairments. Examples of these models are canvases, connectors, hinges, spinners, sliders, lifts, walls, and cutouts. |
3D printed tactile picture books for children with visual impairments: a design probe
Technology to support emergent literacy skills in young children with visual impairments |
10/23 15:30-16:30 ECCR 265 (Warning: date may shift later in the semester to accommodate outside speakers) |
Michael Mozer, Comp. Sci., University of Colorado | Bayesian Optimization:
From A/B Testing To A-Z Testing A/B testing is a traditional method of conducting a randomized controlled experiment to compare the effect of two treatments, A and B, on human subjects. For example, two alternative banner ads may be served to evaluate which is more effective in driving click-throughs. A/B testing is used not only for marketing and web design but is also the dominant paradigm in the experimental behavioral sciences for understanding human learning, reasoning, and decision making. Although the method can be extended to compare a handful of treatments, it does not solve the problem one often faces: searching over a large, possibly combinatorial or continuous space of alternatives to identify the treatment that achieves the best outcome. We describe a solution to this problem using Gaussian process surrogate-based optimization, a Bayesian method that relies on generative probabilistic models of human choice and judgment. Instead of assigning many human subjects to each of a few treatments, the technique evaluates a few subjects on each of many treatments. The technique leverages structure in the space of treatments to infer the function that relates treatment to outcomes. We show the efficiency and accuracy of the technique on a range of problems, including identifying preferred color combinations, maximizing charitable donations, and improving student learning of facts and concepts. |
|
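A minimal sketch of the Gaussian process surrogate-based optimization loop described above, using scikit-learn and an expected-improvement acquisition rule: fit a GP to the treatments evaluated so far, then pick the next treatment that most promises to beat the best outcome observed. The one-dimensional treatment space and the noisy outcome function here are invented stand-ins for illustration; the actual work builds generative models of human choice and judgment rather than a scalar response.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical 1-D treatment space (e.g., a continuous design parameter).
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

def observe_outcome(x):
    """Stand-in for running one subject on treatment x (noisy, unknown to the optimizer)."""
    return np.sin(6 * x) * x + rng.normal(scale=0.1)

def expected_improvement(mu, sigma, best):
    """EI acquisition: how much each candidate promises to exceed the best outcome so far."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Start with a few random treatments, then evaluate one subject per chosen treatment.
X = rng.uniform(0, 1, size=(5, 1))
y = np.array([observe_outcome(x[0]) for x in X])

for _ in range(25):
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.01),
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, observe_outcome(x_next[0]))

print("best treatment found:", X[np.argmax(y)][0], "outcome:", y.max())
```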
10/24 12:00-13:00 Muenzinger D430 | Shaun Kane, Computer Science, University of Colorado | research area: Accessible user interfaces, mobile human-computer interaction |
reading |
10/30 15:30-16:30 ECCR 265 | Karon MacLean, University of British Columbia | Applied
Perspectives on Haptic Interaction with
Regard to
Attention, Affect and Pushing Robots Around Buzzing cell phones and jolty game controllers: This is where the vast majority of users today still are when they think "haptics" (interaction through the sense of touch), despite accelerating technical innovation in recent years. What will ultimately change this? Within the haptics research community and related industries, developments include pre-commercial advances in tactile and force-feedback actuation and sensor development (much of it driven by the popularity and shortcomings of mobile and touchscreen interfaces). These are further spurred by advances in wearable and context-aware computing, robotics, embedded sensing and flexible graphic displays. Human-computer interaction designers meanwhile seek haptic solutions to problems ranging from everyday to esoteric or highly specialized. MacLean’s group has tried to bring effective haptic interaction into people's lives by closely examining how touch (in either direction) can help address real human needs with the benefit of both low- and high-tech innovation. MacLean will give a sense of these efforts with several stories that highlight some of their driving research questions, including: -- Attention: Touch may be a great way to offload the visual sense, but it can just as easily make matters worse. Through integration with contextual information, can we craft a display system with broad potential utility that is attentionally sustainable? -- Affect: What kind of affective information is available in gestural touch? If you could sense it (easily, at low cost), what are some things you could do with it? -- Pushing Robots Around: What's the right place for informal, functional touch in the close-proximity robot-human workplace? BIO: Karon MacLean is Professor of Computer Science at the University of British Columbia, Canada, with a B.Sc. in Biology and Mech. Eng. (Stanford) and an M.Sc. and Ph.D. (Mech. Eng., MIT), and time spent as a professional robotics engineer (Center for Engineering Design, University of Utah) and interaction researcher (Interval Research, Palo Alto). At UBC since 2000, her research specializes in haptic interaction: cognitive, sensory and affective design for people interacting with the computation we touch, emote and move with, whether robots, touchscreens or mobile activity sensors. She has innovated in human-computer interaction curriculum design and teaching practices. |
|
10/31 12:00-13:00 Muenzinger D430 | R. McKell Carter, Psychology and Neuroscience, University of Colorado | The temporal parietal
junction constructs a social context
for decision making Our preferences change dramatically with social context. While the presence of a grandmother may discourage the purchase of alcohol, the presence of an old friend may strongly increase the likelihood of drinking. In previous work, we have identified a region of the brain, the temporal parietal junction (TPJ), that is uniquely predictive of behavior in a social setting but not in a non-social setting. While this provides evidence that the TPJ is uniquely involved in social function, a number of alternative hypotheses describing TPJ function have been offered. In an effort to reconcile these alternative explanations, we propose the Nexus model of TPJ function, which proposes that novel functions (like the ability to consider others' intentions) arise when divergent processes like memory, attention, and semantic and social representations come into close proximity, as is the case in the TPJ. This model makes specific predictions about function and localization. I will describe ongoing work testing some of these predictions in both basic and translational settings, as well as some future work made possible by the extraordinary community at CU Boulder. We conclude that the TPJ constructs a social context that is utilized by frontal regions to produce the dramatic effects on decision making we see when interacting with others. |
reading |
11/20 16:30-17:30 Imig Music Chamber Hall (C199) |
Ian Quinn, Yale | Toward a computational
archeology of pre-18th century music
cognition Standard accounts of the music-theoretic revolution of the 18th century speak of a transition from modal to tonal organization in the music itself. One effect of this transition is a reduction in the number of modes from twelve (in the systems of Glarean and Zarlino) to two: major and minor. Computerized analytical work on datasets of late medieval office chants, early Lutheran chorales, 17th-century English madrigals, and other repertories suggests that the local organization of music in different modes, particularly between cadences, does not differ as much as the standard accounts (based on contemporaneous treatises rather than on musical data) would have us believe. The history of solmization, which, as in modern la-based minor, does not typically hypostatize scale degree prior to the introduction of the rule of the octave, reinforces this view. BIO: Ian Quinn is Professor and Director of Undergraduate Studies in the Department of Music at Yale, where he also teaches in the Cognitive Science Program. He has won the Emerging Scholar Award and the Outstanding Publication Award from the Society of Music Theory. He was Editor of the Journal of Music Theory from 2004 to 2011. Current projects focus on Steve Reich, 17th-century tonality, and computational methods for corpus analysis. |
|
12/4 15:30-16:30 ECCR 265 | Leysia Palen, Computer Science, University of Colorado | Frontiers in Crisis
Informatics Crisis informatics addresses socio-technical concerns in large-scale emergency response. Additionally, it expands consideration to include not only official responders (who tend to be the focus in policy and technology-focused matters), but also members of the public. It therefore views emergency response as a much broader socio-technical system where information is disseminated within and between official and public channels and entities. Crisis informatics wrestles with methodological concerns as it strives to develop new theory and support informed development of ICT and policy. Palen will describe the range of work her team has engaged in at CU-Boulder since 2006, and highlight the different branches of crisis informatics research through discussion of the multidisciplinary research they have conducted here. |
|
12/4 17:00-18:00 SLHS 230 | Ron Gillam, Utah State | Information Processing,
Cognitive Load, and Language
Disorders: From Theory to Clinical Practice This presentation will summarize a neural efficiency model of language disorders that is based on concepts from evolutionary psychology, cognitive load theory and Cowan’s embedded processes theory of working memory. Dr. Gillam will explain how language disorders could result from interactions between deficits in biologically endowed language knowledge and working memory processes. He will present data from four preliminary studies that support the potential of functional Near Infrared Spectroscopy (fNIRS) as a tool for informing our understanding of cognitive load and its relationship to attention, memory and language comprehension. Finally, he will discuss the therapeutic implications of the neural efficiency model and future research directions. |
|
12/5 9:00-10:00 SLHS 230 | Sandra Gillam, Utah State | Fuzzy Connections
Between Language Learning Principles and
Intervention Strategies: Evidence from Narrative Intervention Studies Alan Kamhi has suggested that we can improve clinical practices for children with language and learning disorders by employing intervention strategies that are based on learning principles from studies of language development. However, general learning principles do not always translate readily into effective language intervention practices. Even theoretically sound, well-intentioned, and carefully implemented interventions can result in equivocal outcomes. This presentation will summarize what we have learned about narrative language intervention procedures in light of new theories of working memory and language development. |
|
12/5 10:30-11:30 CINC "fishbowl" (conference room near building entrance) | Karl Moritz Hermann, Oxford |
Distributed Representations for Compositional Semantics
The mathematical representation of semantics is a key issue for Natural Language Processing (NLP). Much research has been devoted to finding ways of representing the semantics of individual words in vector spaces. However, natural language usually comes in structures beyond the word level, with meaning arising not only from the individual words but also from the structure they are contained in at the phrasal or sentential level. In this talk we explore methods for learning distributed semantic representations and models for composing these into representations for larger linguistic units by exploiting neural models. This talk focuses on extending the distributional hypothesis to multilingual data and joint-space embeddings by leveraging parallel data. The models of this class do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages and tasks. Subsequently, I will present a novel technique for semantic frame identification, again using distributed semantic representations. Here, we learn a model that projects the set of word representations for the syntactic context around a predicate to a low-dimensional representation. With a standard argument identification method inspired by prior work, this approach achieves state-of-the-art results on FrameNet-style frame-semantic analysis. |
|
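As a toy illustration of composing word representations into representations for larger linguistic units, the sketch below uses the simplest possible composition function, averaging word vectors into a sentence vector, and compares sentences by cosine similarity. The vocabulary and vectors are random stand-ins; the models in the talk learn their embeddings (including multilingual joint spaces from parallel data) and use richer neural composition than this additive baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50

# Hypothetical embedding table; in the work described above these vectors
# are learned (e.g., from parallel multilingual data), not random.
vocab = ["the", "cat", "sat", "on", "mat", "a", "dog", "slept", "rug"]
embeddings = {w: rng.normal(size=DIM) for w in vocab}

def compose(sentence):
    """Additive (bag-of-words) composition: average the word vectors."""
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = compose("the cat sat on the mat")
s2 = compose("a dog slept on the rug")
print("sentence similarity:", cosine(s1, s2))
```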
12/9, 16:00-17:00, Muenzinger E214 | Richie Davidson, Psychology & Psychiatry, University of Wisconsin Madison | Order and Disorder in
the Emotional Brain Emotions are at the core of human personality: they define each person’s uniqueness and they shape resilience and vulnerability to adversity. Perhaps the single most salient characteristic of emotion is the variability across individuals in how each responds to emotional cues and challenges. This variability is termed “affective style.” Different parameters of affective style can be objectively measured and are instantiated in different underlying neural circuits. Activation patterns assessed with neuroimaging are related to different parameters of affective style and are consistent over time within individuals. Specific patterns of brain activity are related to vulnerability to particular types of disorders. Moreover, patterns of central brain function are related to peripheral biological systems that play a role in physical health and illness. Despite their consistency over time within individuals, these patterns of neural activity are not immutable but rather can be transformed through systematic mental training such as meditation. The literature on neuroplasticity provides a framework for understanding these changes. This latter body of evidence supports the view that happiness, well-being and emotional balance are best regarded as the product of trainable skills. |
|
12/12 12:00-13:00 Muenzinger D430 | Susan Brown, University of Colorado |
From Visual Prototypes of Action to Metaphors: The Imagact Visual
Ontology and Its Extension to Figurative Meanings Action verbs are some of the most polysemous words, with one form often covering a wide range of physical actions, as well as extending to various figurative meanings. The range of variations within and across languages can cause trouble for second language learners and natural language processing tasks. IMAGACT is a corpus-based ontology of action concepts that makes use of the universal language of images to identify the different physical action types expressed by verbs in English, Italian, Chinese and Spanish. IMAGACT makes explicit the variation of meaning of action verbs within one language and allows comparisons of verb variations within and across languages. Because the action concepts are represented with videos, extension into new languages is easily done using competence-based judgments by native-language informants. In the first half of this talk, I will describe the resource's rationale and infrastructure and demonstrate the types of linguistic information a user can derive from it. In the second half, I will describe the extension of this resource to figurative meanings. We first established three main categories of secondary meaning variation--metaphor, metonymy and idiom--and criteria for creating types within these categories for each verb. The criteria rely heavily on the images that compose the IMAGACT ontology of action and on widely accepted processes of meaning extension in linguistics. Although figurative language is known for its amorphous, subjective nature, we have endeavored to create a standard, justifiable process for determining figurative language types for individual verbs. We specifically highlight the benefits that IMAGACT’s representation of the primary meanings through videos brings to the understanding and annotation of secondary meanings. |
reading1 reading2 |