[vslist] Journal of Vision: Special Issue: "The Modelfest dataset: Analysis and modeling"

Journal of Vision announcements@journalofvision.org
Mon Sep 13 18:44:01 2004



Call for Papers: Special Issue


The Modelfest dataset: Analysis and modeling


Over the past 40 years psychophysical and physiological studies have
revealed the multi-channel, parallel processing structure of the human
visual system. This enhanced understanding has been accompanied by
development of numerous models of spatial vision. 

Unfortunately, direct comparisons of these models on the same data sets
have rarely been made. Instead, researchers have generally tested their
own model with their own data. Alternatively, interested researchers
trying to make comparisons have struggled to reproduce models from
incomplete published descriptions. At the 1997 meeting of the Optical
Society of America, a workshop was organized to address this problem.
This workshop ultimately gave rise to the ModelFest group: an
international consortium of vision researchers focused on the goal of
providing a public database of stimuli and psychophysical thresholds for
testing and developing models of human spatial vision.

Through extensive discussion, this group eventually arrived at consensus
on a set of 43 stimuli, as well as on methods of data collection. The
stimuli were selected both to calibrate candidate models and to test
them. The first results were submitted to the group in 1999. All the
ModelFest stimuli and data are now available on the internet at
http://neurometrics.com/projects/Modelfest/IndexModelfest.htm  and at
http://vision.arc.nasa.gov/modelfest/. The present database includes 43
stimuli, two-thirds of which are Gabor patterns, singly or combined in
different ways. The remaining patterns include a line, an edge, a
checkerboard, a sample of spatial noise, a natural scene, and other
patterns.
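For readers less familiar with the stimulus family that dominates the set, a Gabor pattern is simply a sinusoidal luminance grating windowed by a Gaussian envelope. The sketch below generates one such patch; the parameter names and default values are illustrative only and are not the ModelFest stimulus specifications.

```python
import numpy as np

def gabor_patch(size=256, wavelength=32.0, sigma=32.0,
                orientation=0.0, phase=0.0, contrast=1.0):
    """Sinusoidal grating under a Gaussian envelope.

    Illustrative parameters only -- not the ModelFest values.
    Returns a size x size array of contrast values in
    [-contrast, contrast].
    """
    half = size / 2.0
    y, x = np.mgrid[-half:half, -half:half]
    # Rotate the coordinate frame by the grating orientation (radians).
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return contrast * carrier * envelope

patch = gabor_patch()
print(patch.shape)  # (256, 256)
```

Varying orientation, spatial frequency, envelope size, and contrast over such patches is how a grating-based stimulus set probes the spatial channels that candidate models must account for.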

The ModelFest approach offers a dramatic change from how vision modeling
has proceeded in the past. By using a common database of stimuli and
psychophysical thresholds, researchers have a simple way of comparing
model performance and thereby learning from the innovations and
limitations of each model.

To disseminate recent approaches to vision modeling and to promote the
idea of comparing model performance on a common dataset, we invite
researchers to contribute to a Special Issue of the Journal of Vision.
This special issue will focus on applying vision models to the ModelFest
dataset and will include related topics in modeling spatial vision,
including but not limited to:

*	Summary and review of the ModelFest dataset 
*	Statistical analysis of the ModelFest dataset 
*	Dataset limitations and stimuli that should have been included 
*	Application of vision models to the ModelFest dataset 
*	Statistical questions regarding comparison of models 
*	Comparison of the ModelFest data to results from the literature



Guest Editors:

Thom Carney
University of California at Berkeley, CA, and Neurometrics Institute,
Oakland, CA
thom@neurometrics.com <mailto:thom@neurometrics.com>  

Christopher W. Tyler
Smith-Kettlewell Eye Research Institute, San Francisco, CA
cwt@mail.ski.org <mailto:cwt@mail.ski.org>   www.ski.org/cwt
<http://www.ski.org/cwt>  


Deadline for submissions:

December 1, 2004 

Target publication date:

May 1, 2005 

Journal of Vision encourages the use of images, color, movies,
hyperlinks, and other digital enhancements. To submit a paper to this
special issue, please follow the Instructions for Authors.
<http://journalofvision.org/info/info_for_authors.aspx>  
