Snapshot Wisconsin Publication List
With the help of dedicated volunteers, the Snapshot Wisconsin project collects millions of statewide trail camera images that are transformed into usable data. Over the years, these data have been used for wildlife research and decision support by DNR scientists and collaborators at universities, and the resulting findings have been shared through publications.
Publications are a way for research scientists to share their findings and to ensure integrity in the scientific community. As the list of Snapshot-contributed publications continues to grow, the Snapshot Wisconsin team would like to share them with you!
The Snapshot Publication webpage is now available for anyone to view! Publications are organized by topic, with research ranging from the temporal and spatial behavior of deer to predator-prey relationships. Key findings from each research project have been provided below each publication.
The valuable information gathered from these research projects will help further our understanding of our local wildlife and support wildlife management decisions. With the massive Snapshot Wisconsin dataset continuing to support wildlife research, the webpage will be periodically updated as publications are released!
Note: Not all publications are available to the public. Key findings can always be found below each publication listed on the Snapshot webpage.
Diving Into My Trip to Bonaire
Most readers of this blog know that Snapshot Wisconsin brings together people from around the globe who share an interest in classifying Wisconsin’s extraordinary wildlife. In addition to building connections with volunteers, Snapshot Wisconsin works to form connections with organizations, such as the Natural Resources Foundation of Wisconsin (NRF). This non-profit organization’s Diversity in Conservation Internship Program aims to introduce a diverse group of undergraduates to the many career paths in conservation. This summer, Snapshot is hosting one of the seven NRF interns, Mira Johnson.
Hello, my name is Mira, and I have been working with the Snapshot team over the last couple of months. During my time here, I have assisted in daily tasks like preparing equipment for volunteers and moderating Zooniverse. What’s more, I have had the remarkable opportunity to work on individual projects, like designing graphics for volunteer outreach materials, carrying out a small research study, and publishing this blog post! Last spring, I designed and conducted a small research project with two other students on an island called Bonaire. From this experience, I became more interested in learning how observational studies can be designed to reduce confounding variables, as that was a concern apparent in our study. This summer’s internship, spent immersed in the many projects underway at the DNR and being mentored by a research and data scientist, promises to significantly grow and deepen my understanding of reliable research practices.
Please allow me to share a little about myself and my experience in Bonaire that I mentioned previously. I am a junior at Lawrence University in Appleton, Wisconsin, majoring in biology with a focus on marine systems. My interest in marine life began over many visits to the Monterey Bay Aquarium and the tidepools of California with my grandfather. My growing fascination with marine life eventually led me to apply for the Lawrence University Marine Program in 2021. I was accepted, and in the spring of 2022, fifteen students and I traveled to the island of Bonaire in the Southern Caribbean.
Over the two weeks in Bonaire, we surveyed reefs for biodiversity and conducted our small group research projects. Although I had been eagerly anticipating this trip for over a year, when the day finally arrived for our first dive, I was pretty nervous! I grew up in the Midwest, and my previous dive experience was limited to a couple of dives in the murky lakes of Minnesota when I received my SCUBA certification. As a result, on my first dive my eyes were mostly glued to my depth gauge and air supply. It wasn’t long, however, before the tension began to wash away as I glided across the reef identifying the vibrant life below.
Once everyone became comfortable with diving, we surveyed for coral biodiversity using chain transects. This method involved long periods of hovering above the reef as we waited while the videographer swam along each chain. It was during moments like these that we could attentively inspect and appreciate the marine life around us. As we hung neutrally buoyant, I was able to spot some of the reef’s shyer species, like the Queen Angelfish, the Chain Moray, and the Spotted Drum.
When we were not performing chain transects, we were out gathering data for our small group research projects. My group chose to study where on the reef cleaning behavior (a mutualistic interaction where cleaner fish remove ectoparasites from client fish) occurs. Looking at sites where cleaning interactions took place led to an interesting finding: most client fish were cleaned above corals (as opposed to over sponges, anemones, or neither). It was only when we arrived at the analysis and interpretation stage of our study that we realized various interpretations could be made, including some valid opposing arguments. With this experience I began to realize the importance of a well-designed study, and it pushed me to want to learn more about reliable research practices.
As I work on my small research project using the Snapshot Wisconsin database, I am learning ways in which I can develop a well-designed study. Anticipating problems and biases that might arise in the analysis stage is extremely valuable in correctly interpreting the results. In the planning stage, I have observed that involving a diverse group of people early on is helpful, as a data scientist might foresee analytical problems that a research scientist may not, and vice versa. It is exciting to see the values of the NRF Diversity in Conservation Internship Program in action during my time here with Snapshot Wisconsin, and I have enjoyed contributing to the team. All this said, I look forward to continuing to engage with the Snapshot Wisconsin project and interacting with you all on Zooniverse!
A Snapshot From Above
Getting the chance to venture out of the office on a sunny Wisconsin spring day?! Count me in!
In late March 2022, I was lucky enough to tag along with fellow DNR colleagues into the field as they collected aerial images of trout streams for a scientific study. With an interest in general aviation, I am usually the first to jump into the right-seat of a small plane. However, this time, I wouldn’t even make it off the ground!
So, how do you capture streams from above? By unmanned aerial vehicles (UAVs)!
More commonly known as drones, these aerial vehicles are frequently used by researchers throughout the world to capture aerial imagery and data. You can find drones being used in a variety of scientific disciplines ranging from the energy industry to wildlife research to climate physics.
As we traveled almost 1.5 hours west of Madison, we found ourselves in the driftless region of Wisconsin. The target streams had been pre-selected by the DNR’s Office of Applied Science (OAS) researchers. This particular scientific study will provide quantitative data on the effects beaver have on cold-water stream habitats and trout populations. You can learn more about the study here.
Arriving at the first stream, we took a few minutes to get out of the truck and stretch. Ryan Bemowski (OAS Unmanned Aerial Systems Coordinator) set up the drone and programmed the device to follow a preferred trajectory. Nick Hoffman (DNR fisheries technician) verified the set trajectory, and once the drone was programmed, Ryan started it up. He had programmed the drone to fly at about 300 feet off the ground and up it went!
When the programmed altitude was reached, the drone began moving in the direction of the stream we aimed to capture. My role was to follow the drone on foot and help keep an eye on the device at all times, staying wary of trees, power lines, manned aircraft, etc. To get the full image, the drone followed the targeted section of the stream twice.
While following the drone, I got a first-hand glimpse of the cold, clean waters of the driftless area, so clear you could see large trout swimming. The goal of the study is to monitor recolonization of beaver and measure their impacts on water temperature, stream structure, and trout movement and population dynamics. The occasional beaver dam would pop up during our survey, showing us where beaver are returning to the stream.
These collected images, among other habitat data, will be used to better understand the effects of beaver activity and beaver control on salmonids in streams. The image data not only identifies the location of beaver activity, but allows us to directly measure the size of beaver structures and area of beaver impoundments. Infrared temperature sensors mounted on the UAV even let us directly measure surface water temperatures along the length of the stream, identifying cold water springs and possible temperature changes above and below beaver dams.
It was interesting to see the perspective from above, which as you all know, looks a little different than the typical snapshot our project cameras capture! Imagery data, whether captured by UAV or Snapshot Wisconsin trail cameras, plays a critical role in scientific research.
With my interest in general aviation and planes, it was really exciting to add this experience to my belt. I am grateful for this opportunity, and especially grateful to my colleagues Ryan Bemowski and Nick Hoffman for letting me tag along.
A National Collaboration Releases Their First Publication
The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.
For the last three years, Snapshot Wisconsin has been contributing to a similar citizen science program called Snapshot USA, which recently reached an important milestone worth celebrating: the release of their first publication! Congratulations, Snapshot USA!
In honor of Snapshot USA reaching this milestone, the Snapshot Wisconsin team wanted to highlight this fellow citizen science project and share with our volunteers a lesser-known way that Snapshot Wisconsin data is being used.
What is Snapshot USA?
Snapshot USA is a national effort to bring together trail camera data from across the country and learn about what drives the distribution of mammal species within the United States. Snapshot USA takes a similar approach to Snapshot Wisconsin, having people classify trail camera photos to generate usable data for science. The main differences are that Snapshot USA is a nationwide effort and is focused entirely on mammals.
Snapshot USA was organized in 2019 by scientists from the North Carolina Museum of Natural Sciences and the Smithsonian Conservation Biology Institute. They asked fellow researchers, citizen science programs (including Snapshot Wisconsin) and private citizens to upload and classify their trail camera photos. Much to everyone’s excitement, over 150 people and programs participated in the effort.
Better yet, people contributed photos from 110 locations across all 50 states, proving that there are people all over this country who value efforts like Snapshot USA and Snapshot Wisconsin.
Snapshot Wisconsin’s Contributions
For our part, the Snapshot Wisconsin program was thrilled to support a fellow citizen science project. We submitted data from 2019 and 2020, and we are working on submitting data from this last year as well. It is important to us to support other programs like Snapshot USA and build up science together.
Every year, Snapshot Wisconsin has contributed data from around 10 of our trail cameras in the Clam Lake elk camera grid. Snapshot USA required at least ten cameras clustered within a 5 km area, so only a few parts of our camera grid, such as the elk camera grids, met the requirement.
Despite limitations in which cameras we could include, the Snapshot Wisconsin team is glad that we were able to contribute to this effort at all. Jennifer Stenglein, one of Snapshot Wisconsin’s lead scientists, said, “Snapshot Wisconsin is not set up to have areas with clustered cameras. We made a special exception for the elk grid because the data are used to monitor the growing elk herds across the state. Fortunately, the elk grid matched the minimum requirements to participate.”
Stenglein also mentioned how important it was to her personally that Snapshot Wisconsin contributed to this nationwide effort. “As a scientist, having open data is huge. There is so much trail camera data out there, but it’s [isolated] to specific programs or people. Snapshot USA created a place for trail camera data to come together and be available. That allows scientists to ask questions we couldn’t before, like how climate change is impacting species at a national level.” Stenglein is excited to see what other researchers do with the compiled data.
How Snapshot USA Operates
In addition to sampling populations from across a wider scale than Snapshot Wisconsin, Snapshot USA samples from all major habitats and development zones found within the United States. When a new collaborator joins the program, they select the combination of setting (Urban, Suburban, Rural, Wild, Other) and habitat (Forest, Grassland, Desert, Alpine, Beach, Anthropogenic, Other) that matches their camera site.
Our volunteers may notice that some of these site combinations differ from what Snapshot Wisconsin uses. For example, urban deployment does not fit Snapshot Wisconsin’s criteria for setting up a camera. Stenglein thought that the addition of urban areas adds an interesting element to the dataset, but it shows a fundamental difference in what Snapshot Wisconsin and Snapshot USA are trying to capture.
Next, collaborators upload their photos from a specified time window. In 2019, Snapshot USA collected photos from the 14-week period from August to November. Once uploaded, collaborators could start classifying photos, similarly to how our camera hosts do it.
One important difference is that Snapshot USA puts all their photos through a second round of classification – this time by an expert. Expert review happens within Snapshot Wisconsin as well, but only for the species we’ve learned are classified with lower accuracy. Our accuracy analyses have shown that volunteers do a great job of accurately classifying most species, especially the most common species, so Snapshot Wisconsin only expertly classifies the photos of the hard-to-classify species and rare species. Besides, Snapshot Wisconsin would not be able to expertly classify its 60+ million photos. However, this extra step is possible for a program like Snapshot USA.
“Limiting the time window for data collection is really common in trail camera studies,” said Stenglein. “I don’t know if there is any perfect time window for Snapshot USA to choose, since you will always miss something. However, it does make sense for them to select a window of time. It would be too challenging to collect a whole year’s worth of data, let alone have an expert review.”
Once both rounds of classifications are done, the data are assembled into a package and prepped for release in the form of a new publication. This type of publication is called a “data paper” because its main purpose is to release a new dataset for others to work with.
“It’s a cool, new trend in science for data papers to come out,” said Stenglein. “I’ve seen more effort being put towards proper archiving of data. Researchers can use these datasets to test their own hypotheses and come up with new and exciting insights into wildlife distributions in the USA. I think this is where research needs to be, so it’s encouraging to see this trend.”
2019 Data Is Released
In April 2021, Snapshot USA officially published their 2019 dataset. The paper was published in the scientific journal Ecology and had around 100 different authors.
In total, the dataset included photos from 1,509 cameras across 110 locations, and all 50 states and the District of Columbia contributed data. The dataset had 166,036 observations (photos) and found 83 unique mammal species. Seventeen bird species were also detected, which impressed Stenglein, given that the project was looking only for mammals, not birds.
“All together, that’s an impressive number of species detected,” said Stenglein. “Trail cameras aren’t set up to see all species equally. Birds, for example, often spend most of their time above the line of sight of cameras, so capturing 17 species of birds is pretty cool.”
Snapshot Wisconsin’s contributions included sightings of just over 20 of the 83 mammal species found by Snapshot USA. Given the small area that the photos came from, seeing 20 species is a healthy number. Had we been able to use more of the grid, that number would likely have been higher.
The paper reported that the three most detected species nationwide were white-tailed deer, squirrels and raccoons, in that order. Snapshot Wisconsin’s own data visualization tool, the Data Dashboard, shows a similar trend, with white-tailed deer and squirrels being the top two species detected in Wisconsin. Raccoons weren’t third, but they are high on the list.
Coyotes were the most widespread species detected across the nation, which surprised some of the Snapshot Wisconsin team. However, Stenglein explained, “It may be because there is only one major species of coyote. Deer and other common animals change species as you go across the country. Mule deer, white-tailed deer and black-tailed deer each have different ranges across the U.S.”
Stenglein was proud of the Snapshot USA team for pulling this effort together. As one of the main researchers for Snapshot Wisconsin, Stenglein knows how much work it is to collect photos from hundreds of sources and extract usable data from them. Stenglein mentioned that it is great to see another citizen science project release their first publication. “Our Snapshot Wisconsin team only has so much capacity to work on decision-support tools, so it is cool to know that these data will be used in more ways and by more people.”
Stenglein also mentioned that there is a second publication in the works already. This publication will release the 2020 dataset. It’s nice to see such a quick turnaround time for the second publication.
“The peer review process can easily take months to years,” explained Stenglein, “so there will always be a lag. However, I expect that this first lag will be the biggest. I’ve already seen process improvements on the data uploading side. They’ve moved to a more efficient process, which really helps.”
Stenglein believes Snapshot USA has expanded its data collection to Europe as well for the 2021 season, which could offer some interesting comparisons for researchers.
Stenglein’s final thoughts for the Snapshot USA program were:
“I’m so impressed that they pulled this off. We know from Snapshot Wisconsin how difficult it can be to keep things running smoothly, especially when it comes to IT infrastructure and solutions. I wish Snapshot USA all the luck as they continue to expand their program, and I look forward to working with them each year. What you’ve accomplished is impressive. Remember that.”
Elk Camera Updates
Co-authored by Ally Magnin and Emily Donovan
You may have noticed recently on the blog that the Snapshot team has spent some time in the Northwoods conducting fieldwork in the elk grids. But what was our motive as researchers?
Wisconsin’s elk herds are dynamic and do not necessarily occupy the same area all the time. Young individuals may temporarily disperse, cows split off from larger herds to give birth, and the herd as a whole may shift their range as they seek out suitable habitat. While this is to be expected, it creates an interesting problem for camera trap research.
Since 2015, Snapshot Wisconsin has had a portion of our project dedicated to monitoring reintroduced elk herds. Cameras were deployed in grids much smaller than the usual Snapshot Wisconsin grid, increasing the density of cameras in the herd reintroduction area and making it more likely to capture photos of elk. As the herds shifted their range, however, some cameras no longer detected elk. To begin to address this mismatch between our elk grids and the herds’ ranges, Data and Spatial Analyst Emily Buege Donovan conducted an analysis.
Donovan began by combining several elk-related data sources to assess the quality of each camera site. Among these data sources were the most recent GPS locations of collared elk. A portion of the state’s elk are fitted with GPS collars, which transmit a location every 13 hours. GPS collar data is commonly used in wildlife research and management to better understand the movement patterns and resource selection of animal populations. In the present study, Donovan used these data to predict the likelihood that a camera will regularly detect elk. See Figure 1 for an example of the camera locations in relation to the collar data. Camera locations in the northwest portion of the map have a low probability of capturing elk, whereas cameras in the southeast have a high probability of capturing elk photos.
However, because not all elk in Wisconsin are collared, the collar data could not be used exclusively to determine whether a camera site should remain active in elk monitoring efforts. Donovan also needed to bring in the historical elk detections for each camera site. How long had it been since an elk was detected at this site? How many elk photos were taken by each camera? By combining the collar data, photo data, and several other factors, such as ease of access by the volunteer and habitat type, Donovan created a scoring system to determine the best camera locations. Low scoring cameras were marked for removal, and high scoring cameras were marked to stay on the landscape.
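The exact factors and weights in Donovan's scoring system aren't given here, but the general idea of combining collar-based detection probability, detection history, and logistics into a single keep-or-remove score can be sketched as follows. This is a hypothetical illustration only; the function name, inputs, weights, and threshold are invented for this example and are not those used in the actual analysis.

```python
def camera_score(collar_prob: float, months_since_elk: float,
                 elk_photo_count: int, easy_access: bool) -> float:
    """Hypothetical composite score for an elk-grid camera site.

    Combines a collar-based probability of detecting elk, the site's
    detection history, and ease of access into one number.
    Higher scores suggest keeping the camera on the landscape.
    """
    # Recent detections matter more; influence fades over two years
    recency = max(0.0, 1.0 - months_since_elk / 24.0)
    # Cap the influence of very photo-rich sites
    history = min(elk_photo_count, 100) / 100.0
    # Small bonus for sites volunteers can reach easily
    access = 0.1 if easy_access else 0.0
    return 0.5 * collar_prob + 0.25 * recency + 0.15 * history + access

# Rank two example sites and mark the low-scoring one for removal
sites = {"A": camera_score(0.9, 2, 40, True),
         "B": camera_score(0.1, 30, 1, False)}
keep = [name for name, score in sites.items() if score >= 0.3]
print(keep)  # site A stays, site B is marked for removal
```

In practice, each input would be derived from real data layers (for instance, a density surface built from the collar locations), and any cutoff would be tuned against how many cameras the program can realistically maintain.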
Once we determined which elk blocks should be removed, we reached out to the volunteer who was assigned to each of those blocks and requested their assistance in removing the camera. For the blocks that didn’t have a volunteer assigned, our team planned fieldwork for the summer of 2020 to remove the cameras.
Many of the cameras marked for removal were deployed over three years ago, so navigating to them proved difficult in some cases. We traversed tamarack swamps, bushwhacked through thick understory, hopped across streams, and puzzled over satellite imagery to reach each destination. Our team enjoyed the challenge!
In addition to removing old cameras, we also conducted camera checks on the blocks that didn’t currently have a volunteer assigned in order to get them ready for a new volunteer to monitor, and replaced cameras that had shown signs of malfunction. We made it a priority to take diligent notes about how to navigate to each camera site to make navigation easier for future volunteers.
Overall, it was a very productive field season that provided the team with the opportunity to step away from our computer screens and into the outdoors. It also gave us an even greater appreciation for the work our volunteers do to monitor their cameras.
Are you interested in monitoring a camera as a part of our elk project? Sign up today at elk.snapshotwisconsin.org. Applications are reviewed when blocks open up, and we will contact you with more information once you’re accepted!
Check out our other elk-related blog posts below:
Elk Snapshots Mean Better Elk Modeling
How Much Do Elk Antlers Weigh?
Taking a Bite Out of Deer Aging
The age composition of a population can tell us a lot of useful information. In whitetail deer (Odocoileus virginianus), age data provides information about deer herd characteristics, harvest or mortality pressure on a specific age group, and general progress of a wildlife management program overall. A common way to age deer is through tooth wear and replacement. Let’s chew into this technique.
As a determined undergraduate, I voluntarily participated in a few of the DNR’s attempts to collect age and sex data of whitetail deer through in-person registration. Those data were collected from hunters pulling into the local gas station to show off (and ultimately register) their deer. Polite small talk was usually cut off by the sight of my clipboard, knife, flashlight, and jaw spreader. With a cheery smile I’d ask, “May we collect some information about the age and sex of your deer for management purposes?” Most hunters gladly gave us the chance to examine their deer, but every now and then a trophy buck would pull in, and we knew better than to ask. Why? Because aging deer by tooth wear and replacement requires spreading (and sometimes cutting) the jaw and cheek to get a better look at the back teeth. In-person registration is no longer done; nowadays, the DNR gets aging data from deer processors and CWD sampling.
Although aging deer from tooth wear and replacement has its limitations, it is the quickest and cheapest way to determine the age of a deer. It requires determining which teeth are present in the jawbone and how worn those teeth are. The data determine which of the following age classes a deer falls into: fawn (younger than 1 year), yearling (1-1.5 years), or adult (categorized as 2, 3, 4-5, 6-8, 9-11, or 12+ years).
Fawns usually have only three or four fully erupted teeth along each side of their jaw. The first three are temporary premolars and are often called “milk teeth”. Deer are born with these teeth fully erupted in place (unlike humans). It is important to note that the third premolar has three cusps. A deer with only three or four fully erupted teeth along the jaw is a fawn (Image A).
Yearlings are described as approximately 1.5 years old in the fall and generally have six fully erupted teeth on each side of the jaw. The third premolar is worn down by now but should still only have three cusps as it has not yet been replaced by a permanent tooth (Image B).
At 18-19 months old, the temporary premolars (first and second premolars) have been replaced by permanent premolars, and the third premolar has become a permanent tooth with only two cusps. A deer with six fully erupted teeth along the jaw, including these newly replaced, lightly worn premolars, is a yearling (Image C).
Adult deer are 2.5 years and older. They will have six fully erupted teeth along each side of the jaw: three permanent premolars and three permanent molars. At this point, it is no longer as simple as counting the teeth and cusps. It takes a sharp eye to observe the amount of wear on the teeth. Over time, teeth wear down, increasing the width of the dentin exposed along each cusp. Deer older than yearlings are aged through wear of the cusps closest to the tongue on the cheek teeth. At 2.5 years, the third premolar is stained, the fourth tooth shows little wear and has a distinct point, and the dentin is thinner than the white enamel. As the deer ages, the cusp points wear down and the teeth become relatively flat (Image D).
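The replacement-based rules above can be summarized as a simple decision procedure. The sketch below only illustrates the counting logic, with hypothetical inputs; real-world aging of adults depends on trained judgment of cusp wear, which no simple rule captures.

```python
def age_class(erupted_teeth_per_side: int, third_premolar_cusps: int,
              premolars_permanent: bool) -> str:
    """Rough deer age class from tooth replacement.

    A simplified sketch of the tooth wear and replacement rules
    described above, not a field-ready aging method.
    """
    if erupted_teeth_per_side <= 4:
        # Only three or four fully erupted teeth: a fawn
        return "fawn (< 1 year)"
    if third_premolar_cusps == 3:
        # Temporary three-cusped third premolar still in place
        return "yearling (~1.5 years)"
    if premolars_permanent:
        # Permanent two-cusped third premolar: roughly 18-19 months
        # or older; telling these apart requires assessing cusp wear
        return "yearling or adult (assess wear)"
    return "undetermined"

print(age_class(4, 3, False))  # fawn (< 1 year)
print(age_class(6, 3, False))  # yearling (~1.5 years)
```

The cases where this sketch gives up ("assess wear") are exactly where the years of practice mentioned below come in.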
By no means am I an expert in aging deer. As you may now understand, learning how to age deer using tooth wear and replacement is not a one-day deal. Like everything, it takes practice. While we only took a bite out of deer aging, years of training and practice can allow researchers (and undergraduate volunteers) to age a deer down to the exact year. As mentioned, using tooth wear and replacement isn’t the most accurate technique for aging deer, but it is the most hands-on (and fun) approach.
Using Snapshot’s Bird Photos in New Ways
The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.
A male American woodcock stretches his wings skyward in a courtship display, a great horned owl strikes an unknown target on the forest floor and a male northern cardinal dutifully feeds his newly fledged young.
These are moments in the lives of birds captured in Snapshot Wisconsin trail camera photos. Until recently, however, many of these avian images were hidden within the Snapshot Wisconsin dataset, waiting to be uncovered by a team of bird enthusiasts. Rather than watching birds from behind a pair of binoculars as they normally would, this time they were behind a keyboard.
When Snapshot volunteers classify an image, they normally can choose from a list of around 40 wildlife species. Only five of these species are among Wisconsin’s 250 regular bird species: wild turkey, ruffed grouse, ring-necked pheasant, sandhill crane, and the endangered whooping crane. These five species are options on the list because they either are of special management interest within the Wisconsin DNR or are easier to detect by Snapshot Wisconsin cameras.
The rest of the bird photos are classified into a catchall group, called “Other Bird.” Until recently, the “Other Bird” images were considered incidental images, but the increasing size of this category caught the attention of the Snapshot Wisconsin team. In fact, “Other Bird” is the second most common of the six bird categories, behind only Wild Turkey (Figure 1, Panel A), which comprises over a quarter of all bird photos.
The team reached out to the Wisconsin DNR’s Bureau of Natural Heritage Conservation (NHC) to brainstorm ideas on how to leverage the “Other Bird” dataset, which had amassed 150,000 images at the time and was still growing.
Planting A Seed Of Collaboration
During their discussion with the NHC, the idea was brought up that these “Other Bird” images could contribute to the Wisconsin Breeding Bird Atlas II (WBBA II). The WBBA II is an enormous, multi-year field survey to document breeding birds and their distribution across the state. Information like the frequency of breeding and which areas birds are breeding in helps the DNR see changes in breeding status for many bird species. This information can also be compared to data from the previous survey (from 1995 to 2000) and sets a benchmark for future comparisons as well.
The current survey uses data collected between 2015 and 2019. Coincidentally, the earliest Snapshot images are also from 2015, so the dates of the survey aligned quite well. This collaboration seemed like a good fit.
However, there are some important differences between data collected from birding in the field and from images captured by Snapshot trail cameras. For example, many birds spend much of their time in the canopy, outside the camera’s field of view. Additionally, birders often use sound cues to identify signs of breeding in the field. Trail camera images do not contain these types of breeding cues. Lastly, certain breeding behaviors can be too fleeting to observe from a set of three images.
The team wasn’t sure yet if the trail camera photos would truly contribute much to the WBBA II.
A Collaboration Was Born
Members of the Snapshot Wisconsin and NHC teams ran a test of the “Other Bird” photos. They reviewed a small, random subset of images and learned that many of the birds could be identified down to the species level. The teams also found enough evidence of breeding, such as sightings in a suitable habitat (for breeding) or the presence of recently fledged young. Both teams decided to go ahead with the collaboration and see what they could find.
The full dataset was sent to a special iteration of Zooniverse, called the Snapshot Wisconsin Bird Edition, and birders began classifying. Volunteers classified all of the “Other Bird” images down to the species level and assigned a breeding code to each image. In just over a year, the large collection of bird photos was classified, thanks to some dedicated volunteers.
The NHC’s Breeding Bird Atlas Coordinator, Nicholas Anich, extracted these new records and added them to the WBBA II. The atlas utilizes a statewide survey block system that is based on a preexisting grid from the United States Geological Survey. The survey block system requires that certain blocks be thoroughly surveyed in order for the atlas to have adequate statewide coverage, and many of the new Snapshot data points contributed to these priority survey blocks. Anich said, “[The Snapshot data] will be valuable information for the WBBA II, and we even discovered a few big surprise species, [such as] Spruce Grouse, Western Kingbird, and Whooping Cranes.”
In addition to these rare species, many of the high-value classifications were what Anich described as breeding code “upgrades.” The observed species already had been recorded in a given block, but the photos showed stronger evidence of breeding than had previously been reported. For example, an adult of a given species may have already been spotted in the area during the breeding season, but a photo showed a courtship display. The courtship display is stronger proof of breeding in the area than a single adult sighting.
How Useful Were the Snapshot Photos?
Both the (in-person) birding efforts and the trail camera photos picked up species that the other did not, so both approaches brought different strengths to the table.
One of the strengths of the trail cameras was that they are round-the-clock observers, able to pick up certain species that the in-person birding efforts missed. Anich noticed that nocturnal species (American Woodcock and Barred Owl) and galliforms (Wild Turkey, Ruffed Grouse) were more common in the Snapshot dataset than reported by birders in the field, at least in certain areas. “Running into gamebirds was a bit the luck of the draw,” Anich said.
Both Anich and the Snapshot team agreed that the trail cameras were best used in conjunction with in-person surveys, rather than as a substitute for them, because each approach observed a different collection of species.
Insights Into The “Other Bird” Category
As a bonus for anyone who is interested in this project, the Snapshot team analyzed the photos classified for the WBBA II and created an infographic of the orders and families included. The photos included were captured between 2015 and 2019.
An immediate trend the team saw was that many of the birds were larger-bodied species or species that spend time on or near the ground. For example, Anseriformes (ducks and geese) and Pelecaniformes (herons and pelicans) are the second and third most common orders in the “Other Bird” category. The next most observed groups include woodpeckers, hawks, eagles, owls and shorebirds. While these birds may not spend all of their time near the ground, food sources for these species are often found in the lower stratum, the area where most trail cameras are oriented.
Interestingly, the most common order (comprising over half of the “Other Bird” classifications) was Passeriformes (perching birds or songbirds). This order does not initially appear to fit the trend of ground-dwelling or larger-bodied birds. However, closer inspection revealed that the most common families in this order did fit the trend. For example, Turdidae (thrushes, especially American Robins), Corvidae (crows, ravens and jays) and Icteridae (blackbirds and grackles) comprised a much higher percentage of the photos than any other families.
Thanks To Everyone Who Helped Classify Bird Photos On Zooniverse!
Overall, the Snapshot Wisconsin Bird Edition project was a huge success. In total, 154 distinct bird species were identified by nearly 200 volunteers, and over 194,000 classifications were made. The Snapshot Wisconsin and WBBA II teams extend a huge thank you to the Zooniverse volunteers who contributed their time and expertise to this project. The team was happy to see such strong support from the Wisconsin birding community, as well as from around the globe.
If you weren’t able to help with this special project, stay tuned for other unique opportunities to get involved as Snapshot continues to grow and use its data in new ways. If you contributed to the project, reach out to the Snapshot team and let them know what your favorite species to classify was.
Population Dynamics for Tracking Wildlife Populations Through Time
In wildlife conservation and management, population estimates are highly desired information, and tracking them gives important insights about the health and resilience of a population through time. For example, the Wisconsin Department of Natural Resources (WI DNR) annually estimates the size of the deer population in more than 80 Deer Management Units (roughly the size of counties). Fun fact – Snapshot Wisconsin contributes data on deer fawn-to-doe ratios to make these population estimates possible.
Annual population growth can be estimated by dividing the population estimate in the current year by the population estimate in the previous year (we call this growth rate lambda). A lambda of 1 indicates a stable population, a lambda less than 1 a declining population, and a lambda greater than 1 a growing population.
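This growth rate is simple enough to sketch in a few lines of code. The function and the example population numbers below are purely illustrative, not actual DNR estimates:

```python
def lambda_growth(n_prev: float, n_curr: float) -> float:
    """Annual growth rate: this year's estimate divided by last year's."""
    return n_curr / n_prev

# Hypothetical estimates: 1,000 animals last year vs. this year's count.
print(lambda_growth(1000, 1100))  # 1.1 -> growing population
print(lambda_growth(1000, 1000))  # 1.0 -> stable population
print(lambda_growth(1000, 900))   # 0.9 -> declining population
```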
What leads to the stability, growth, or decline of a population is the foundation of population dynamics. Population dynamics are a way to understand and describe the changes in wildlife population numbers and structure through time. The processes for growth are births and immigration into the population, and the processes for decline are deaths and emigration away from the population, which leads to the following formula at the heart of population dynamics:
Population size this year = Population size last year + births – deaths + immigrants – emigrants
In established wildlife populations, we often focus solely on the births (called recruitment) and deaths within a population and assume immigration and emigration are equal and therefore cancel each other out.
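The balance equation above can be written as a small function. The numbers here are hypothetical, chosen only to show how the terms combine:

```python
def next_population(n_prev, births, deaths, immigrants=0, emigrants=0):
    """Population balance: N_t = N_(t-1) + births - deaths + immigrants - emigrants.

    For established populations, immigrants and emigrants are assumed to
    cancel out, so they default to zero.
    """
    return n_prev + births - deaths + immigrants - emigrants

# Hypothetical herd of 1,000 with 250 births (recruitment) and 150 deaths.
print(next_population(1000, births=250, deaths=150))  # 1100
```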
For deer, the birth part of the equation is captured by those fawn-to-doe ratios mentioned earlier, and the death portion is estimated as a combination of mortality sources. One source is deer harvest, and because Wisconsin requires registration of harvested deer, we have a pretty good understanding of this mortality source. Other mortality sources are from natural causes and are best assessed through radio-collaring and tracking deer through their lifetimes.
Bobcat and fisher are two other Wisconsin species whose births and deaths are estimated annually. For these species, the recruitment into each population is estimated from our understanding of how many kittens (bobcat young) and kits (fisher young) are born into the population. The data come from the reproductive tracts of harvested females. The reproductive tracts contain scars for each placenta that was attached, thereby providing information on pregnancy rates and litter sizes. Similar to deer, information on mortality in these populations comes from registered harvest and estimates of other non-harvest sources of mortality collected from radio-collaring research studies.
We are developing ways for Snapshot Wisconsin to contribute to our understanding of wildlife population dynamics. A real value of Snapshot Wisconsin is that it tracks all types of wildlife. For each species, we can develop metrics that will help us better track its population dynamics, and therefore gain a better understanding of the current status and trajectories of our wildlife populations.
One of these metrics is the proportion of cameras that capture a photo of a species within a given time window and spatial area. We can treat this metric as an index to population size, which is very useful for tracking populations across space and time. If this proportion increases or decreases in some part of the state, that gives us information about the distribution and movement of species. For example, the southern border of fisher distribution in Wisconsin (currently around the center of the state) has been thought to be shifting further south. This metric can help us document when and where this shift may be occurring. This metric is now tracked for 19 Wisconsin species on the Snapshot Wisconsin data dashboard.
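A minimal sketch of how such a proportion could be computed is shown below. The camera IDs, species lists, and data structure are invented for illustration and do not reflect Snapshot Wisconsin's actual database:

```python
def detection_proportion(camera_records, species):
    """Proportion of cameras that captured at least one photo of a species.

    camera_records: dict mapping a camera ID to the set of species it
    detected during a given time window and spatial area (hypothetical).
    """
    if not camera_records:
        return 0.0
    detected = sum(1 for spp in camera_records.values() if species in spp)
    return detected / len(camera_records)

# Hypothetical records for five cameras in one region of the state.
records = {
    "cam1": {"deer", "bobcat"},
    "cam2": {"deer"},
    "cam3": {"fisher", "bobcat"},
    "cam4": {"deer"},
    "cam5": set(),  # no detections at this camera
}
print(detection_proportion(records, "bobcat"))  # 0.4 (2 of 5 cameras)
```

Tracking this number per region and per year is what makes it an index: the raw value depends on detectability, but consistent changes over time suggest a real shift in distribution.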
In the following graphics, you can see the proportion of trail cameras detecting bobcat in each ecological landscape of Wisconsin in 2017, 2018 and 2019. The patterns are consistent across these three years and show that bobcats are distributed across roughly two-thirds of the state. We will be tracking this metric and others for bobcats, as well as for other Wisconsin species.
The power of Snapshot Wisconsin is just beginning to emerge as we are collecting consistent, year-round, and multi-year data in this effort. Thanks to all of our volunteers who help make this possible!
What Makes Data “Good”?
The following piece was written by Snapshot Wisconsin’s Data Scientist, Ryan Bemowski.
Have you ever heard the phrase “Data doesn’t lie”? It’s often used when suggesting a conclusion based on the way scientific data tells a story. The statement is true: raw data is incapable of lying. However, data collection, data processing, data presentation and even interpretation can be skewed or biased. Data is made “good” by understanding its collection, processing, and presentation methods while accounting for their pitfalls. Some might be surprised to learn it is also the responsibility of the consumer or observer of the data to be vigilant while drawing conclusions from what they are seeing.
Thanks to the data collection efforts of more than 3,000 camera host volunteers over 5 years, Snapshot Wisconsin has amassed over 54,000,000 photos. Is all this data used for analysis and presentations? The short answer is, not quite. Snapshot Wisconsin uses a scientific approach, so any photos that do not follow the collection specifications are unusable for analysis or presentation. Under these circumstances, a certain amount of data loss is expected during the collection process. Let’s dive into why some photos are not usable in our data analysis and presentations.
When data is considered unusable for analysis and presentation, corrections are made during the data processing phase. There are numerous steps in processing Snapshot Wisconsin data, and each step may temporarily or permanently mark data as unusable for presentation. For example, a camera which is baited with food, checked too frequently (such as on a weekly basis), checked too infrequently (such as once a year), or set in an improper orientation may lead to permanently unusable photos. This is why it is very important that camera hosts follow the setup instructions when deploying a camera. The two photo series below show a proper camera orientation (top) and an improper camera orientation (bottom). The properly oriented camera is pointed along a flooded trail, while the improperly oriented camera is pointed at the ground. Improper orientation usually happens through no fault of the camera host (weather or animal interference can knock a camera askew), but it must be corrected for the photos to be usable for analysis and presentation.
In another case, a group of hard-to-identify photos may be temporarily marked as unusable. Once the species in those photos are expertly verified by DNR staff, the photos are used for analysis and presentation.
Usable data from the data processing phase can be analyzed and presented. The presentation phase often filters the data down to a specific species, timeframe, and region. With every new filter, the dataset gets smaller. At a certain point the dataset becomes too small and introduces an unacceptably high potential of being misleading or misinterpreted. In the Snapshot Wisconsin Data Dashboard, once a dataset becomes too small to visualize effectively, it is marked as “Insufficient Data.” Such data is still used in other calculations where enough data is present, but it cannot reliably be presented on its own.
Let’s use the Data Dashboard presence map with deer selected as an example. The photo on the left contains 5,800,000 detections. A detection is a photo event taken when an animal walks in front of a trail camera. What if we were to narrow down the data by randomly selecting only 72 detections, one per county? After taking that sample of one detection per county, only 12 of the detections had deer in them, as shown by the photo on the right. The second plot is quite misleading, since it appears that only 12 counties have detected a deer. When data samples are too small, the data can easily be misinterpreted. This is precisely why very small data samples are omitted from data presentations.
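A quick simulation illustrates the effect. The county counts, detection counts, and deer rate below are made up for demonstration; only the sampling logic matters:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

# Hypothetical: 72 counties, 1,000 detections each, and ~20% of all
# detections are deer. Every county genuinely has deer detections.
counties = [f"county_{i}" for i in range(72)]
detections = {
    c: ["deer" if random.random() < 0.2 else "other" for _ in range(1000)]
    for c in counties
}

counties_with_deer = sum("deer" in d for d in detections.values())
print(counties_with_deer)  # 72 -- every county has deer in the full data

# Now take a tiny sample: one random detection per county.
sample = {c: random.choice(d) for c, d in detections.items()}
deer_in_sample = sum(v == "deer" for v in sample.values())
print(deer_in_sample)  # far fewer than 72 -- deer seem absent from most counties
```

The full data shows deer everywhere, but the one-per-county sample suggests deer are missing from most of the state, which is exactly the kind of misinterpretation that dropping tiny samples prevents.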
There are many choices to make when building data presentations. We make it a priority to display as much information, in as much detail, as possible while still creating reliable and easily interpretable visualizations.
In the end, interpretation is everything. It is the responsibility of the observer of the data presentation to be open and willing to accept the data as truth, yet cautious of various biases and potential misinterpretations. It is important to refrain from making too many assumptions as a consumer of the presentation. For example, in the Snapshot Wisconsin Data Dashboard detection rates plot (shown below), cottontails have only a fraction of the detections that deer have across the state. It is quite easy to think “The deer population in Wisconsin is much larger than the cottontail population,” but that would be a misinterpretation regardless of how true or false the statement may be.
Remember, the Snapshot Wisconsin Data Dashboard presents data about detections from our trail cameras, not overall population. There is no data in the Snapshot Wisconsin Data Dashboard which implies that one species is more populous than any other. Detectability, or how likely an animal is to be detected by a camera, plays a major role in the data used on the Snapshot Wisconsin Data Dashboard. Deer are one of the largest, most detectable species while the smaller, brush dwelling cottontail is one of the more difficult to detect.
So, is the data “good”?
Yes, Snapshot Wisconsin is full of good data. If we continue to practice proper data collection, rigorous data processing, and mindful data presentation, Snapshot Wisconsin data will continue to get even better. Interpretation is also a skill which needs practice. While viewing any data presentation, be willing to accept the presented data as truth, but also be vigilant in your interpretation so that you are not misled and do not misinterpret the presentation.
Individuals Matter Too! – When You Can ID Them
The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.
Elk are similar to deer in that they lack identifiable markings most of the time. This makes it hard to know whether an elk in one photo is the same elk that appears in another photo. However, some elk in Wisconsin have uniquely numbered collars, making it possible to identify one individual elk from another.
Using these collars, researchers can piece together all the Snapshot photos of that elk and follow its movement through time. Knowing that the elk in two different photos is the same individual holds a special type of power for researchers and gives them extra information about the size of the elk herd. That is, if the researchers can leverage that additional information.
Glenn Stauffer, Natural Resources Research Scientist within the Office of Applied Science, is leading the initiative to identify individual elk and use these data to improve the annual elk population estimate. Stauffer said, “I was approached because of my quantitative modeling experience to evaluate different ways of using the elk photographs as data to fit an elk model. [Collectively,] the various models I and others have worked on provide a range of options to estimate the [elk] population size and to evaluate how reliable the models are.”
To better understand the significance of Stauffer’s work, it helps to know how elk have historically been counted in Wisconsin.
“Long before I came onto the scene, the primary way of counting elk was to go out and count them all,” said Stauffer. This method requires extensive time in the field and considerable local knowledge about where elk groups often hang out. Researchers could count some elk by their numbered collars, but they also needed to know how many uncollared elk were in each group. The elk herd grew over the years, and more and more elk did not have identifiable collars. This added another challenge for researchers who were trying to count all the unmarked elk (and make sure they weren’t double counting any of them).
Since the estimate of the elk population size still needed to include an unknown number of these unmarked individuals, the DNR started experimenting with models that didn’t require individual identifications. These new models were also a boon because the herd was growing too large to collar efficiently; collaring every animal was becoming too time-consuming and expensive.
Instead, these models are based on images from the Snapshot camera grid, as discussed in the previous article, but even these camera-based models had room for improvement. Thus, Stauffer began researching a model that incorporated the best of both approaches: a model that was based on the camera data but still incorporated limited individual identification back into the model.
Stauffer looked into a variety of models but zeroed in on one type of model in particular. Stauffer explained that this model belongs to a class of models called spatial mark resight models. Spatial mark resight models combine the best of both marked and unmarked models. Stauffer’s model identifies individuals by their collars but also makes inferences from the photos of unmarked elk at the same time.
Spatial mark resight models also relax a major assumption made by the previous camera model, the closure assumption. “This assumption states that the number of elk at a particular camera location doesn’t change from one encounter occasion to the next, and it is clearly violated. Elk are wandering from camera to camera,” said Stauffer. Stauffer’s hybrid model relaxes the closure assumption and attempts to figure out the minimum number of distinct elk it can identify from the pictures.
Collared elk are often easy to identify in the photos. These collared elk are given the ID assigned to their respective collar number so that all photos of a particular elk share the same ID. The model also attempts to assign IDs to uncollared elk in the photos. It probabilistically assigns IDs to all remaining elk – either uncollared elk or unknown elk (because the collar or the collar number isn’t visible in the photo) – based on characteristics visible in each photo.
Fortunately, Stauffer’s model uses as much information as it can get from the photos when assigning IDs. For example, if one photo is of a calf and another photo is of a cow, then the model won’t assign the same ID to these animals. After all, we know those are two distinct elk, not one. Similarly, a marked but unidentified elk with one collar type can’t be the same as another unidentified elk with a different collar type. The model even uses spatial data to differentiate unmarked elk from two different photos. For example, photos at two locations close together might be from the same elk, but photos from two distant locations probably represent two different elk.
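The kinds of constraints described above can be sketched as a simple compatibility check. This is a highly simplified illustration, not Stauffer's actual model: the field names, the 10 km threshold, and the photo format are all invented, and the real model works probabilistically rather than with hard yes/no rules:

```python
def could_be_same_elk(photo_a, photo_b, max_km=10.0):
    """Check whether two photos could plausibly show the same individual.

    Each photo is a dict with fields (possibly None): collar_id,
    collar_type, age_class, and a (x, y) location in km. All of these
    are illustrative stand-ins for the cues described in the article.
    """
    # Matching collar IDs settle the question outright.
    if photo_a["collar_id"] and photo_b["collar_id"]:
        return photo_a["collar_id"] == photo_b["collar_id"]
    # A calf and a cow cannot be the same animal.
    if photo_a["age_class"] != photo_b["age_class"]:
        return False
    # Two different collar types mean two different animals.
    if (photo_a["collar_type"] and photo_b["collar_type"]
            and photo_a["collar_type"] != photo_b["collar_type"]):
        return False
    # Photos taken far apart are unlikely to be the same elk.
    dx = photo_a["loc"][0] - photo_b["loc"][0]
    dy = photo_a["loc"][1] - photo_b["loc"][1]
    return (dx * dx + dy * dy) ** 0.5 <= max_km

cow = {"collar_id": None, "collar_type": "orange", "age_class": "cow", "loc": (0, 0)}
calf = {"collar_id": None, "collar_type": "orange", "age_class": "calf", "loc": (1, 1)}
nearby_cow = {"collar_id": None, "collar_type": "orange", "age_class": "cow", "loc": (2, 0)}

print(could_be_same_elk(cow, calf))        # False: different age classes
print(could_be_same_elk(cow, nearby_cow))  # True: compatible and close together
```

Ruling out impossible matches like these shrinks the set of ways the unmarked photos can be partitioned into individuals, which is what lets the model estimate a minimum number of distinct elk.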
Capitalizing on all the information available in the Snapshot photos, the model makes an estimate of how many elk are likely in Wisconsin’s elk herds. As the elk herds continue to grow, this modeling approach helps estimate the elk population and hopefully saves the DNR time and money.
How well does the model work?
“[Technically,] the spatial count model doesn’t require any information about individual IDs, but it performs pretty poorly without them,” said Stauffer. “There is a series of papers from about 2013 on that shows if you add information about individuals to spatial counts, you can really improve the accuracy and precision of the spatial model.”
“Theoretically, this makes the model estimates more precise,” said Stauffer. To check, Stauffer collaborated with a colleague to run a bunch of simulations with known, perfect data, and the model worked reasonably well. These simulation results are encouraging because the model wasn’t massively overpredicting or underpredicting the number of elk in the herds, both of which could have management implications for elk.
When asked if identifying individuals from photographs is worth the extra effort, Stauffer said, “Working with models that don’t require individual IDs still requires considerable time to classify photos. Identifying individuals is only a little bit more work on top of that. In general, when you can’t meet the assumptions of a model, then it is worth getting individual identifications, if you can.”
Just how much additional effort should be put into individual IDs? Stauffer believes part of the answer comes from asking what other information can be obtained from the collars. “If we are already putting the collars on those we capture or release, then we might as well get as much out of them as possible, such as through using photographs [like Snapshot does],” said Stauffer.
Incorporating Another Year
After the Snapshot team finishes assembling the 2020 elk dataset, a large dataset comprising all the Snapshot photos of elk in 2020, Stauffer will run his model on this new dataset and generate an estimate of last year’s final elk population. Stauffer’s estimate will be closely compared to other estimates generated by the previous camera-based models and by collaring efforts alone to see how well each approach performs.
Stauffer took a minute to reflect on his work so far with the elk population estimate. Stauffer said, “The modeling process has been really rewarding, diving into this topic in a depth that I would not have done if I did not have this Snapshot photo dataset to work with. The simulation also went well. It illustrated that the model works the way we claim it works, which is good. Fitting the model to the elk data is mostly encouraging, but it shows that there are situations where it doesn’t do as good of a job as we hoped. Specifically, for calves, it still needs to be fine-tuned.”
From physically counting elk to modeling counts of only unknown individuals to modeling counts of both unknown and known individuals, Wisconsin’s approach to estimating elk abundance has evolved through time. Chances are, as the composition and distribution of the herd changes in the coming years, the approach will evolve even more. But for the next few years, Stauffer’s work will help direct how we count elk now.