
Using Snapshot’s Bird Photos in New Ways

The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

A male American woodcock stretches his wings skyward in a courtship display, a great horned owl strikes an unknown target on the forest floor, and a male northern cardinal dutifully feeds his newly fledged young.

These are moments in the lives of birds captured in Snapshot Wisconsin trail camera photos. Until recently, however, many of these avian images were hidden within the Snapshot Wisconsin dataset, waiting to be uncovered by a team of bird enthusiasts. Instead of watching birds from behind a pair of binoculars as they normally would, this time they were behind a keyboard.

When Snapshot volunteers classify an image, they normally choose from a list of around 40 wildlife species. Only five of Wisconsin’s roughly 250 regularly occurring bird species are on that list: wild turkey, ruffed grouse, ring-necked pheasant, sandhill crane and the endangered whooping crane. These five species made the list because they are either of special management interest to the Wisconsin DNR or easier for Snapshot Wisconsin cameras to detect.

The rest of the bird photos are classified into a catchall group called “Other Bird.” Until recently, the “Other Bird” images were considered incidental, but the increasing size of this category caught the attention of the Snapshot Wisconsin team. In fact, “Other Bird” is the second most common of the six bird classifications, behind only Wild Turkey (Figure 1, Panel A), which comprises over a quarter of all bird photos.

The team reached out to the Wisconsin DNR’s Bureau of Natural Heritage Conservation (NHC) to brainstorm ideas on how to leverage the “Other Bird” dataset, which had amassed 150,000 images at the time and was still growing.

Great horned owl on a log

Planting A Seed Of Collaboration

During their discussion with the NHC, the idea was raised that these “Other Bird” images could contribute to the Wisconsin Breeding Bird Atlas II (WBBA II). The WBBA II is an enormous, multi-year field survey to document breeding birds and their distribution across the state. Information like how frequently birds breed and which areas they breed in helps the DNR see changes in breeding status for many bird species. This information can also be compared to data from the previous survey (from 1995 to 2000) and sets a benchmark for future comparisons as well.

The current survey uses data collected between 2015 and 2019. Coincidentally, the earliest Snapshot images are also from 2015, so the dates of the survey aligned quite well, and the collaboration seemed like a good fit.

However, there are some important differences between data collected from birding in the field and from images captured by Snapshot trail cameras. For example, many birds spend much of their time in the canopy, outside the camera’s field of view. Additionally, birders often use sound cues to identify signs of breeding in the field. Trail camera images do not contain these types of breeding cues. Lastly, certain breeding behaviors can be too fleeting to observe from a set of three images.

The team wasn’t sure yet if the trail camera photos would truly contribute much to the WBBA II.

A western kingbird flying across a prairie

A Collaboration Was Born

Members of the Snapshot Wisconsin and NHC teams ran a test of the “Other Bird” photos. They reviewed a small, random subset of images and learned that many of the birds could be identified down to the species level. The teams also found plenty of evidence of breeding, such as sightings in suitable breeding habitat or the presence of recently fledged young. Both teams decided to go ahead with the collaboration and see what they could find.

The full dataset was sent to a special iteration of Zooniverse, called the Snapshot Wisconsin Bird Edition, and birders began classifying. All of the “Other Bird” images were classified down to the species level, and each image was assigned a breeding code. In just over a year, the large collection of bird photos was classified, thanks to some dedicated volunteers.

The NHC’s Breeding Bird Atlas Coordinator, Nicholas Anich, extracted these new records and added them to the WBBA II. The atlas utilizes a statewide survey block system that is based on a preexisting grid from the United States Geological Survey. The survey block system requires that certain blocks be thoroughly surveyed in order for the atlas to have adequate statewide coverage, and many of the new Snapshot data points contributed to these priority survey blocks. Anich said, “[The Snapshot data] will be valuable information for the WBBA II, and we even discovered a few big surprise species, [such as] Spruce Grouse, Western Kingbird, and Whooping Cranes.”

In addition to these rare species, many of the high-value classifications were what Anich described as breeding code “upgrades.” The observed species already had been recorded in a given block, but the photos showed stronger evidence of breeding than had previously been reported. For example, an adult of a given species may have already been spotted in the area during the breeding season, but a photo showed a courtship display. The courtship display is stronger proof of breeding in the area than a single adult sighting.

A spruce grouse in a field

How Useful Were the Snapshot Photos?

Both the (in-person) birding efforts and the trail camera photos picked up species that the other did not, so both approaches brought different strengths to the table.

One of the strengths of the trail cameras is that they are round-the-clock observers, able to pick up certain species that the in-person birding efforts missed. Anich said he noticed that nocturnal species (American Woodcock and Barred Owl) and galliforms (Wild Turkey, Ruffed Grouse) were more common in the Snapshot dataset than reported by the birders in the field, in certain areas at least. “Running into gamebirds was a bit the luck of the draw,” Anich said.

Both Anich and the Snapshot team agreed that the trail cameras were best used in conjunction with in-person surveys, rather than as substitutes for one another, because each approach observed a different collection of species.

An infographic showing the bird orders and families within the “Other Bird” category.

Insights Into The “Other Bird” Category

As a bonus for anyone who is interested in this project, the Snapshot team analyzed the photos classified for the WBBA II and created an infographic of the orders and families included. The photos included were captured between 2015 and 2019.

An immediate trend the team saw was that many of the birds were larger-bodied species, ground-dwelling species or species that spend time near the ground. For example, Anseriformes (ducks and geese) and Pelecaniformes (herons and pelicans) are the second and third most common orders in the “Other Bird” category. The next most observed groups include woodpeckers, hawks, eagles, owls and shorebirds. While these birds may not spend all of their time near the ground, their food sources are often found in the lower stratum, where most trail cameras are aimed.

Interestingly, the most common order (comprising over half of the “Other Bird” classifications) was Passeriformes (perching birds or songbirds). This order does not initially appear to fit the trend of ground-dwelling or larger-bodied birds. However, closer inspection revealed that the most common families in this order did fit the trend. For example, Turdidae (thrushes, especially American Robins), Corvidae (crows, ravens and jays) and Icteridae (blackbirds and grackles) comprised a much higher percentage of the photos than any other families.

Thanks To Everyone Who Helped Classify Bird Photos On Zooniverse!

Overall, the Snapshot Wisconsin Bird Edition project was a huge success. In total, 154 distinct bird species were identified by nearly 200 volunteers, and over 194,000 classifications were made. The Snapshot Wisconsin and WBBA II teams extend a huge thank you to the Zooniverse volunteers who contributed their time and expertise to this project. The team was happy to see such strong support from the Wisconsin birding community, as well as from around the globe.

If you weren’t able to help with this special project, stay tuned for other unique opportunities to get involved as Snapshot continues to grow and use its data in new ways. If you contributed to the project, reach out to the Snapshot team and let them know what your favorite species to classify was.

Population Dynamics for Tracking Wildlife Populations Through Time

In wildlife conservation and management, population estimates are highly sought-after information, and tracking them through time gives important insights into the health and resilience of a population. For example, the Wisconsin Department of Natural Resources (WI DNR) annually estimates the size of the deer population in more than 80 Deer Management Units (each roughly the size of a county). Fun fact – Snapshot Wisconsin contributes data on deer fawn-to-doe ratios to make these population estimates possible.

A doe and a fawn

Annual population growth can be estimated by dividing the population estimate in the current year by the population estimate in the previous year (we call this growth rate lambda). A lambda = 1 is a stable population, a lambda < 1 is a declining population, and a lambda > 1 is a growing population.
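To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up numbers rather than actual DNR estimates) of how lambda is calculated and interpreted:

```python
# Minimal sketch: lambda from two consecutive annual population estimates.
# The numbers below are hypothetical, not real DNR estimates.
def growth_rate(pop_last_year, pop_this_year):
    """Annual population growth rate (lambda)."""
    return pop_this_year / pop_last_year

lam = growth_rate(1000, 1080)
if lam > 1:
    trend = "growing"
elif lam < 1:
    trend = "declining"
else:
    trend = "stable"
print(f"lambda = {lam:.2f} -> {trend}")  # lambda = 1.08 -> growing
```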

Examples of graphs showing stable, growing, and declining populations based on their lambda value.

What leads to the stability, growth, or decline of a population is the foundation of population dynamics. Population dynamics are a way to understand and describe the changes in wildlife population numbers and structure through time. The processes for growth are births and immigration into the population, and the processes for decline are deaths and emigration away from the population, which leads to the following formula at the heart of population dynamics:

Population size this year = Population size last year + births – deaths + immigrants – emigrants
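For readers who like to see the equation in action, here is a small illustrative sketch of the formula above; the counts are invented for demonstration only:

```python
# The population-dynamics equation above, expressed as a function.
# All numbers are made up for illustration.
def population_this_year(pop_last_year, births, deaths, immigrants, emigrants):
    return pop_last_year + births - deaths + immigrants - emigrants

print(population_this_year(pop_last_year=500, births=120, deaths=90,
                           immigrants=10, emigrants=10))  # 540
```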

A turkey hen with five poults

These young turkey poults would contribute to the number of “births” in a population estimate for turkeys.

A red fox with a rabbit in its mouth

This cottontail fell prey to a red fox. Animals that have died would contribute to the number of “deaths” in the population estimate for their species.

In established wildlife populations, we often focus solely on the births (called recruitment) and deaths within a population and assume immigration and emigration are equal and therefore cancel each other out.

For deer, the birth part of the equation is captured by those fawn-to-doe ratios mentioned earlier, and the death portion is estimated as a combination of mortality sources. One source is deer harvest, and because Wisconsin requires registration of harvested deer, we have a pretty good understanding of this mortality source. Other mortality sources are from natural causes and are best assessed through radio-collaring and tracking deer through their lifetimes.

A deer with a radio collar around its neck.

This deer is part of the WI DNR’s radio-collar tracking program.

Bobcat and fisher are two other Wisconsin species whose births and deaths are estimated annually. For these species, the recruitment into each population is estimated from our understanding of how many kittens (bobcat young) and kits (fisher young) are born into the population. The data come from the reproductive tracts of harvested females. The reproductive tracts contain scars for each placenta that was attached, thereby providing information on pregnancy rates and litter sizes. Similar to deer, information on mortality in these populations comes from registered harvest and estimates of other non-harvest sources of mortality collected from radio-collaring research studies.

A bobcat with a radio collar around its neck.

A bobcat with a WI DNR radio-collar.

Bobcat kittens.

Bobcat kittens captured on a Snapshot Wisconsin trail camera. 

We are developing ways for Snapshot Wisconsin to contribute to our understanding of wildlife population dynamics. A real value of Snapshot Wisconsin is that it tracks all types of wildlife. For each species, we can develop metrics that will help us better track its population dynamics, and therefore gain a better understanding of the current status and trajectories of our wildlife populations.

One of these metrics is the proportion of cameras that capture a photo of a species within a given time period and spatial area. We can treat this metric as an index to population size, which is very useful for tracking populations across space and time. If the proportion of cameras detecting a species in some part of the state increases or decreases over time, that gives us information about the distribution and movement of that species. For example, the southern border of the fisher distribution in Wisconsin (currently around the center of the state) has been thought to be shifting further south. This metric can help us document when and where this shift may be occurring. This metric is now tracked for 19 Wisconsin species on the Snapshot Wisconsin data dashboard.
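As a rough sketch of how such an index could be computed, the snippet below tallies the proportion of cameras detecting a species by region and year. The data layout and names are assumptions for illustration, not the actual Snapshot Wisconsin pipeline:

```python
# Proportion of cameras detecting a species, by region and year.
# The records below are invented; the real Snapshot data differ.
from collections import defaultdict

records = [  # (camera_id, ecological_landscape, year, species detected that year?)
    ("cam01", "Northwest Sands", 2019, True),
    ("cam02", "Northwest Sands", 2019, False),
    ("cam03", "Central Sand Hills", 2019, True),
    ("cam04", "Central Sand Hills", 2019, True),
]

totals = defaultdict(lambda: [0, 0])  # (region, year) -> [cameras, cameras with a detection]
for cam, region, year, detected in records:
    totals[(region, year)][0] += 1
    totals[(region, year)][1] += int(detected)

for (region, year), (n_cams, n_detected) in sorted(totals.items()):
    print(region, year, f"{n_detected / n_cams:.2f}")
```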

In the following graphics, you can see the proportion of trail cameras detecting bobcat in each ecological landscape of Wisconsin in 2017, 2018 and 2019. The patterns are consistent across these three years and show that bobcats are distributed across roughly two-thirds of the state. We will be tracking this metric and others for bobcats, as well as for other Wisconsin species.

Three maps of Wisconsin showing bobcat detections on Snapshot Wisconsin cameras.

Maps showing proportion of bobcat detections on Snapshot Wisconsin cameras in different ecological landscapes in 2017 (far left), 2018 (middle), and 2019 (far right).

The power of Snapshot Wisconsin is just beginning to emerge as we are collecting consistent, year-round, and multi-year data in this effort. Thanks to all of our volunteers who help make this possible!

What Makes Data “Good”?

The following piece was written by Snapshot Wisconsin’s Data Scientist, Ryan Bemowski. 

Have you ever heard the phrase “Data doesn’t lie”? It’s often used when suggesting a conclusion based on the way scientific data tells a story. The statement is true: raw data is incapable of lying. However, data collection, data processing, data presentation and even interpretation can be skewed or biased. Data is made “good” by understanding its collection, processing and presentation methods while accounting for their pitfalls. Some might be surprised to learn it is also the responsibility of the consumer or observer of the data to be vigilant while drawing conclusions from what they are seeing.

A graphic showing how data moves from collection to processing and presentation.

Data Collection

Thanks to the data collection efforts of more than 3,000 camera host volunteers over 5 years, Snapshot Wisconsin has amassed over 54,000,000 photos. Is all this data used for analysis and presentations? The short answer is, not quite. Snapshot Wisconsin uses a scientific approach and therefore any photos which do not follow the collection specifications are unusable for analysis or presentation. Under these circumstances, a certain amount of data loss is expected during the collection process. Let’s dive more into why some photos are not usable in our data analysis and presentations.

Data Processing

When data is considered unusable for analysis and presentation, corrections are made during the data processing phase. There are numerous steps in processing Snapshot Wisconsin data, and each step may temporarily or permanently mark data as unusable for presentation. For example, a camera that is baited with food, checked too frequently (such as on a weekly basis), checked too infrequently (such as once a year), or set up in an improper orientation may produce permanently unusable photos. This is why it is very important that camera hosts follow the setup instructions when deploying a camera. The two photo series below show a proper camera orientation (top) and an improper camera orientation (bottom). The properly oriented camera is pointed along a flooded trail, while the improperly oriented camera is pointed at the ground. Improper orientation usually happens through no fault of the camera host, often because of weather or animal interaction, but it must be corrected for the photos to be usable for analysis and presentation.

A properly oriented camera (top) compared to an improperly oriented camera (bottom).

In another case, a group of hard-to-identify photos may be temporarily marked as unusable. Once the identity of the species in each photo is expertly verified by DNR staff, the photos are used for analysis and presentation.

Data Presentation

Usable data from the data processing phase can be analyzed and presented. The presentation phase often filters the data down to a specific species, timeframe and region. With every new filter, the dataset gets smaller. At a certain point, the dataset can become too small and carries an unacceptably high potential of being misleading or misinterpreted. In the Snapshot Wisconsin Data Dashboard, once the data becomes too small to visualize effectively, it is marked as “Insufficient Data.” This data is still used in other calculations where enough data is present, but it cannot reliably be presented on its own.

Snapshot Wisconsin Data Dashboard presence plot with over 5,800,000 detections (left) and a similar plot with only 72 detections sampled (right).

Let’s use the Data Dashboard presence map with deer selected as an example. The photo on the left contains 5,800,000 detections. A detection is a photo event taken when an animal walks in front of a trail camera. What if we were to narrow down the size of the data that we are looking at by randomly selecting only 72 detections, one per county? After taking that sample of one detection per county, only 12 of the detections had deer in them, as shown by the photo on the right. The second plot is quite misleading since it appears that only 12 counties have detected a deer. When data samples are too small, the data can easily be misinterpreted. This is precisely why data samples that are very small are omitted from data presentations.
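To see how easily a tiny sample can mislead, here is a toy simulation (synthetic numbers only, not actual dashboard data) that mimics drawing one random detection per county when a species appears in a modest share of all detections:

```python
# Toy simulation: draw one random detection per county and count how many
# "show" the species. The 17% detection share is an assumption for illustration.
import random

random.seed(1)
species_share = 0.17   # assumed fraction of all detections containing the species
counties = 72          # Wisconsin has 72 counties
hits = sum(random.random() < species_share for _ in range(counties))
print(f"{hits} of {counties} counties appear to have the species in a 1-detection sample")
```

Even though the species occurs statewide in this toy example, most counties come up empty simply because the sample is far too small.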

There are many choices to make when presenting data. We make it a priority to display as much information, in as much detail, as possible while still creating reliable and easily interpretable visualizations.

Interpretation

In the end, interpretation is everything. It is the responsibility of the observer of a data presentation to be open and willing to accept the data as truth, yet cautious of various biases and potential misinterpretations. It is important to refrain from making too many assumptions as a consumer of the presentation. For example, in the Snapshot Wisconsin Data Dashboard detection rates plot (shown below), cottontails have only a fraction of the detections that deer have across the state. It is quite easy to think “The deer population in Wisconsin is much larger than the cottontail population,” but that would be a misinterpretation regardless of how true or false the statement may be.

A bar graph showing detections per year of the five most common species.

Remember, the Snapshot Wisconsin Data Dashboard presents data about detections from our trail cameras, not overall population. There is no data in the Snapshot Wisconsin Data Dashboard that implies one species is more populous than any other. Detectability, or how likely an animal is to be detected by a camera, plays a major role in the data used on the Snapshot Wisconsin Data Dashboard. Deer are one of the largest, most detectable species, while the smaller, brush-dwelling cottontail is one of the more difficult to detect.

So, is the data “good”?

Yes, Snapshot Wisconsin is full of good data. If we continue to practice proper data collection, rigorous data processing and mindful data presentation, Snapshot Wisconsin data will continue to get even better. Interpretation is also a skill that needs practice. While viewing any data presentation, be willing to accept the presented data as truth, but also be vigilant in your interpretation so you are not misled and do not misinterpret the presentation.

Individuals Matter Too! – When You Can ID Them

The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

Elk are similar to deer in that they lack identifiable markings most of the time. This makes it hard to know whether an elk in one photo is the same elk that appears in another photo. However, some elk in Wisconsin have uniquely numbered collars, making it possible to identify one individual elk from another.

Using these collars, researchers can piece together all the Snapshot photos of that elk and follow its movement through time. Knowing that the elk in two different photos is the same individual holds a special type of power for researchers and gives them extra information about the size of the elk herd. That is, if the researchers can leverage that additional information.

Glenn Stauffer, Natural Resources Research Scientist within the Office of Applied Science, is leading the initiative to identify individual elk and use these data to improve the annual elk population estimate. Stauffer said, “I was approached because of my quantitative modeling experience to evaluate different ways of using the elk photographs as data to fit an elk model. [Collectively,] the various models I and others have worked on provide a range of options to estimate the [elk] population size and to evaluate how reliable the models are.”

Elk Herd

Identifying Individuals

To better understand the significance of Stauffer’s work, it helps to know how elk have historically been counted in Wisconsin.

“Long before I came onto the scene, the primary way of counting elk was to go out and count them all,” said Stauffer. This method requires extensive time in the field and considerable local knowledge about where elk groups often hang out. Researchers could count some elk by their numbered collars, but they also needed to know how many uncollared elk were in each group. The elk herd grew over the years, and more and more elk did not have identifiable collars. This added another challenge for researchers who were trying to count all the unmarked elk (and make sure they weren’t double counting any of them).

Since the estimate of the elk population size still needed to include an unknown number of these unmarked individuals, the DNR started experimenting with models that didn’t require individual identifications. These new models were also a boon because the herd was growing too large to collar efficiently; collaring was becoming too time-consuming and expensive.

Instead, these models are based on images from the Snapshot camera grid, as discussed in the previous article, but even these camera-based models had room for improvement. Thus, Stauffer began researching a model that incorporated the best of both approaches: a model that was based on the camera data but still incorporated limited individual identification back into the model.

An antlered bull elk with a tracking collar

Stauffer’s Model

Stauffer looked into a variety of models but zeroed in on one type of model in particular. Stauffer explained that this model belongs to a class of models called spatial mark resight models. Spatial mark resight models combine the best of both marked and unmarked models. Stauffer’s model identifies individuals by their collars but also makes inferences from the photos of unmarked elk at the same time.

Spatial mark resight models also relax a major assumption made by the previous camera model, the closure assumption. “This assumption states that the number of elk at a particular camera location doesn’t change from one encounter occasion to the next, and it is clearly violated. Elk are wandering from camera to camera,” said Stauffer. Stauffer’s hybrid model relaxes the closure assumption and attempts to figure out the minimum number of distinct elk it can identify from the pictures.

Collared elk are often easy to identify in the photos. These collared elk are given the ID assigned to their respective collar number so that all photos of a particular elk share the same ID. The model also attempts to assign IDs to uncollared elk in the photos. It uses probabilistic methods to assign IDs to all remaining elk – either uncollared elk or unknown elk (because the collar or the collar number isn’t visible in the photo) – based on characteristics visible in each photo.

Fortunately, Stauffer’s model uses as much information as it can get from the photos when assigning IDs. For example, if one photo is of a calf and another photo is of a cow, then the model won’t assign the same ID to these animals. After all, we know those are two distinct elk, not one. Similarly, a marked but unidentified elk with one collar type can’t be the same as another unidentified elk with a different collar type. The model even uses spatial data to differentiate unmarked elk from two different photos. For example, photos at two locations close together might be from the same elk, but photos from two distant locations probably represent two different elk.
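The model itself assigns IDs probabilistically, but the hard constraints Stauffer describes can be pictured as a simple screening step. The sketch below is purely illustrative (the field names, collar types and distance cutoff are invented), showing how two photos might be ruled out as the same individual:

```python
# Illustrative only: hard constraints that rule out two photos being the same elk.
# The real spatial mark-resight model handles this probabilistically.
from math import hypot

def could_be_same_elk(a, b, max_km=10.0):
    """False if the two photo records cannot be the same animal, True if plausible."""
    if a["age_class"] != b["age_class"]:           # e.g. a calf cannot also be a cow
        return False
    if a["collar_type"] and b["collar_type"] and a["collar_type"] != b["collar_type"]:
        return False                               # two different collar types
    dist_km = hypot(a["x_km"] - b["x_km"], a["y_km"] - b["y_km"])
    return dist_km <= max_km                       # cameras too far apart -> unlikely

photo1 = {"age_class": "cow", "collar_type": None,     "x_km": 2.0, "y_km": 3.0}
photo2 = {"age_class": "cow", "collar_type": "orange", "x_km": 4.0, "y_km": 5.5}
print(could_be_same_elk(photo1, photo2))  # True: nothing rules it out
```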

Capitalizing on all the information available in the Snapshot photos, the model makes an estimate of how many elk are likely in Wisconsin’s elk herds. As the elk herds continue to grow, this modeling approach helps estimate the elk population and hopefully saves the DNR time and money.

Bull Elk

How well does the model work?

“[Technically,] the spatial count model doesn’t require any information about individual IDs, but it performs pretty poorly without them,” said Stauffer. “There is a series of papers from about 2013 on that shows if you add information about individuals to spatial counts, you can really improve the accuracy and precision of the spatial model.”

“Theoretically, this makes the model estimates more precise,” said Stauffer. To check, Stauffer collaborated with a colleague to run a bunch of simulations with known, perfect data, and the model worked reasonably well. These simulation results are encouraging because the model wasn’t massively overpredicting or underpredicting the number of elk in the herds, both of which could have management implications for elk.

When asked if identifying individuals from photographs is worth the extra effort, Stauffer said, “Working with models that don’t require individual IDs still requires considerable time to classify photos. Identifying individuals is only a little bit more work on top of that. In general, when you can’t meet the assumptions of a model, then it is worth getting individual identifications, if you can.”

Just how much additional effort should be put into individual IDs? Stauffer believes part of the answer comes from asking what other information can be obtained from the collars. “If we are already putting the collars on those we capture or release, then we might as well get as much out of them as possible, such as through using photographs [like Snapshot does],” said Stauffer.

Incorporating Another Year

After the Snapshot team finishes assembling the 2020 elk dataset, a large dataset composed of the data from all the Snapshot photos of elk in 2020, Stauffer will run his model using this new dataset and generate an estimate of last year’s final elk population. Stauffer’s estimate will be closely compared to other estimates generated by the previous camera-based models and by collaring efforts alone to see how well each approach performs.

Stauffer took a minute to reflect on his work so far with the elk population estimate. Stauffer said, “The modeling process has been really rewarding, diving into this topic in a depth that I would not have done if I did not have this Snapshot photo dataset to work with. The simulation also went well. It illustrated that the model works the way we claim it works, which is good. Fitting the model to the elk data is mostly encouraging, but it shows that there are situations where it doesn’t do as good of a job as we hoped. Specifically, for calves, it still needs to be fine-tuned.”

From physically counting elk to modeling counts of only unknown individuals to modeling counts of both unknown and known individuals, Wisconsin’s approach to estimating elk abundance has evolved through time. Chances are, as the composition and distribution of the herd changes in the coming years, the approach will evolve even more. But for the next few years, Stauffer’s work will help direct how we count elk now.

Elk Snapshots Mean Better Elk Modeling

The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

At the start of every year, DNR staff begin compiling a large dataset of elk sightings from the previous calendar year, and the data, once compiled, is used to calculate the total number of elk that live in the state. This method has been a standard practice since the second reintroduction of elk to Wisconsin.

What some of you may not know is that Snapshot plays an important role in counting elk by providing sightings, particularly of bulls. In fact, Snapshot has more than 250 of its cameras (over 10% of all Snapshot cameras) dedicated to monitoring elk alone. These cameras are clustered in the three areas of the state with part of the elk herd – Clam Lake, Flambeau River and Black River Falls. These elk cameras are arranged into a grid-like pattern in each area, just like the rest of the Snapshot grid, except that the density of cameras in the elk grid is a lot higher.

A few members of the Snapshot team are among those working on the 2020 elk dataset, so the team decided to focus this newsletter on elk and how they use photos to learn about the elk herd.

An elk cow, bull, and two calves.

The Snapshot team invited Dr. Jennifer Price Tack, Large Carnivore and Elk Research Scientist and fellow scientist within the Office of Applied Science, to add her perspective to this newsletter on why Snapshot’s photos matter for elk.

The task of integrating Snapshot data into the elk model was originally the work of Joe Dittrich, who laid a solid foundation for Price Tack. Since Price Tack joined the Office of Applied Science at the end of 2019, she has been using Snapshot data to model how various quota alternatives will affect the elk herd size in the years to come.

“My research focuses on [elk] populations because populations are the scale at which we manage wildlife,” said Price Tack. Population is the starting point for all decisions that are made about managing wildlife in Wisconsin. The status of a population determines how decisions are made, policy is framed, quotas are set, permits are allocated, and so on… Population is the unit of concern for the DNR.

Price Tack continued, “While individual animals are important and make up a population, our ability to manage them breaks down some at the individual level, [simply] due to the infeasibility of monitoring individuals.”

For species like elk, which normally lack easy-to-identify markings, individual identification is often difficult. Possible, as discussed in the next article, but difficult. Thus, populations tend to be the scale of most species work at the DNR, including Price Tack’s work on elk.

As Price Tack walks us through her research on the elk population, check out the unique way that Snapshot photo data are used to monitor this large herbivore population.

Feeding Photo Data Into The Model

Photos of elk can have multiple forms of data in them, beyond just what animals are present in the picture. There is camera location data, for example, which provides information about which areas of land the elk are using and not using.

There is also movement data. The Snapshot team learned that elk calves, cows and bulls have different movement patterns and are seen at different rates throughout the year. When bulls are the most active, for example, cows tend to be less active.

The camera data also helps Snapshot determine a calf-to-cow ratio for elk, although it isn’t as simple as dividing the number of calf photos by the number of cow photos. Cows move around more than calves do and are more detectable in photos, given their larger size. Using knowledge about calf and cow visibility, calves and cows are modeled separately, and those numbers are then used to calculate the calf-to-cow ratio for elk.

“I remember first learning about Snapshot and thinking it is such a cool resource! There is so much you can do with camera data.” said Price Tack. “I have experience working in other systems that use camera data, so I know [firsthand] that using camera data has a lot of benefits” – benefits like providing many forms of data at once and being more cost-effective than extensive collaring. “I wanted to tap in and work with these folks.”

Elk herd walking through the snow

Price Tack mentioned that she even had the Snapshot logo in her interview presentation. She was already thinking about how to get the most out of Snapshot’s camera data.

“Now that I’m here, my focus is on filling research needs to inform decisions,” Price Tack continued. “[Our research] is going to be critical to helping wildlife management and species committees make informed decisions for elk, such as deciding elk harvest quotas in the upcoming years. Snapshot data is one tool we can use to fill those research needs. It is available, and I’d like to use it as much as feasible.”

Besides estimating the population of the elk classes (e.g. calves, cows and bulls), Snapshot data is currently being used to help estimate population parameters and help us understand what is happening with the population. Population parameters are estimates of important characteristics of the population, such as recruitment (birth rate), mortality (death rate) and survival rates of different elk classes within the population.

Price Tack’s model uses matrix algebra to take an initial elk population size and project the population into the future, using what we know about elk population parameters. In other words, the model can predict how large the elk population is likely to grow in the years to come. There is natural variation, however, that can cause some years to be unpredictably good or bad for elk, so the model needs to be updated each year to keep its accuracy as high as possible.
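As a loose illustration of what matrix algebra means here, the sketch below projects a made-up elk population one year ahead. The classes, survival rates and recruitment values are invented for demonstration and are not the DNR’s actual parameter estimates:

```python
# Illustrative matrix projection of an elk population (invented parameters).
import numpy as np

n_now = np.array([80.0, 200.0, 120.0])   # [calves, cows, bulls]

# Entry (i, j): contribution of class j this year to class i next year.
A = np.array([
    [0.00, 0.45, 0.00],   # calves produced per cow (recruitment)
    [0.40, 0.88, 0.00],   # calves surviving into cows; cow survival
    [0.40, 0.00, 0.85],   # calves surviving into bulls; bull survival
])

n_next = A @ n_now
print(n_next.round(1), "total:", round(n_next.sum(), 1))
```

Updating the matrix each year with newly estimated recruitment and survival rates is what keeps a projection like this from drifting away from reality.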

Thanks to Snapshot’s camera data, we have a system in place to calculate each year’s population parameters and continue updating the model each year. This should help us catch if anything of concern happens to the population and (hopefully) fix it before it becomes a threat.

Improving the Elk Model

Another of Price Tack’s tasks related to Snapshot is improving the elk model. Many of the improvements Price Tack is researching aim to address data collection for a larger population.

The elk population was very small when the DNR first reintroduced elk to the state in 1995 and again in 2015. The DNR used intensive monitoring methods back then to collar (and track) every elk in the herd, since intensive methods are best suited for small populations. However, with the elk herd doing so well, it won’t be long before a different approach is needed. The DNR wants to transition to a method more appropriate for a larger elk population.

Currently, the DNR is early in the process of ramping up non-invasive, cost-effective methods like Snapshot monitoring and toning down the collaring effort. This transition will take time, though, happening over the next few years.

Price Tack also mentioned another modification under consideration. Price Tack and the Snapshot team are looking into repositioning some of the cameras within the elk grid. Currently, the elk grid doesn’t perfectly align with where the elk are congregating. There are a few cameras outside of the elk range that don’t see any elk, and there are edges of the elk’s range that extend beyond where the cameras are deployed. Repositioning the cameras should mean more elk pictures, which means more elk data.

Elk calf

The Frontier of Camera Monitoring

The role of Snapshot in monitoring elk is evolving, and Price Tack and the Snapshot team believe it is for the better. While they can’t guarantee that Snapshot will always play a central role in collecting data on elk, Snapshot will fill this role for the next few years at least.

Price Tack said, “This is the frontier of using camera trap data for elk. Every year, new approaches to using camera trap data are being developed. That has me excited that, even though we don’t have all the answers now, more opportunities may be on the horizon.”

You can also get more elk-related news by signing up for the Elk in Wisconsin topic on GovDelivery. Joining this email list (or others like it, including a GovDelivery topic for Snapshot Wisconsin) is the best way to make sure you don’t miss out on news you are interested in.

What Happens to Photos Once Uploaded?

The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

Since Snapshot reached 50 million photos, the Snapshot team felt it was a good time to address one of the most asked questions about photos: what happens to photos once they are uploaded by volunteers? At first, the process seems complicated, but a member of the Snapshot team, Jamie Bugel, is here to walk us through the process, one step at a time.

Bugel is a Natural Resources Educator and Research Technician at the DNR, but she works on the volunteer side of Snapshot. Bugel said, “I mainly help volunteers troubleshoot issues with their equipment or with their interactions with the MySnapshot interface. I am one of the people who answer the Snapshot phone, and I help update the user interface by testing functionality. There is also lots of data management coordination on the volunteer side of the program that I help with.”

Bugel listed off a few of the more common questions she and the rest of the Snapshot team get asked, including who reviews photos after the initial classification, what happens to the photos that camera hosts can’t identify, and how mistakes get rectified. “We get asked those [questions] on a weekly to daily basis,” said Bugel.

It Starts With a Three-Month Check and an Upload

Every three months, trail camera hosts are supposed to swap out the SD card and batteries in their trail camera. At the same time, volunteers fill out a camera check sheet, including what time of day they checked the camera, how many photos were on the SD card and if there was any equipment damage.

“You should wait at least three months to check [your] camera, so you won’t disturb the wildlife by checking more often. We want to view the wildlife with as minimal human interference as possible,” said Bugel. “At the same time, volunteers should check [their camera] at least every three months, because batteries don’t last much longer than three months. Checking this often is important to avoid missing photos.”

After the volunteer does their three-month check, they bring their camera’s SD card back home, enter the information from their camera check sheet into their MySnapshot account and upload their photos.

Bugel said it can take anywhere from 4 to 48 hours for the photos to appear in the volunteer’s MySnapshot account. Fortunately, the server will send an email when the photos are ready, so volunteers don’t have to keep checking. Volunteers can start classifying their photos after receiving the email.

A fisher walking through the snow

Initial Classification By Camera Hosts

The first round of classification is done by the trail camera hosts. The returned photos will sit in the Review Photos section of their MySnapshot account while the host classifies the photos as Human, Blank or Wildlife. The wildlife photos are also further classified by which species are present in the photo, such as beaver, deer or coyote.

This initial classification step is very important for protecting the privacy of our camera hosts, as well as helping on the back end of data processing. Over 90% of all photos are classified at this step by the camera hosts. When they are done classifying photos, they click “review complete,” and the set of photos is sent to the Snapshot team for the second round of classification.

Staff Review

The second round of classification is the staff review. Members of the Snapshot team review sets of photos to verify that all human or blank photos have been properly flagged. “For example, a deer photo may include a deer stand in the background. That type of photo will not go to Zooniverse because there is a human object in the photo,” said Bugel. Fortunately, nearly all human photos are taken during the initial camera setup or while swapping batteries and SD card, so they are usually clumped and easy to spot.

The second reason that staff review photos after the initial classification is for quality assurance. Since some animal species are tricky to correctly classify, someone from the Snapshot team reviews sets to verify that the photos were tagged with the correct species. This quality assurance step helps rectify mistakes. “Sometimes there are photos classified as blank or a fawn that are actually of an adult deer,” said Bugel. “We want to catch that mistake before it goes into our final database.”

In cases where the set of photos wasn’t classified by the camera host, the team will also perform the initial classification to remove human and blank photos. The Snapshot team wants to make sure any photos that reveal the volunteer’s identity or the location of the camera are removed before those photos continue down the pipeline.

Branching Paths

At this point in the process, photos branch off and go to different locations, depending on their classification. Blank (43%) and human (2%) photos are removed from the pipeline. Meanwhile, the wildlife photos (20%) move on either to Zooniverse for consensus classification or directly to the final dataset. The remaining photos, such as unclassified photos still awaiting initial review, don’t yet fall into one of these categories.

Photos of difficult-to-classify species, such as wolves and coyotes, are sent to Zooniverse for consensus classification. Bugel explained, “The photos [of challenging species] will always go to Zooniverse, even after volunteer classification and staff member verification, because we’ve learned we need more eyes on those to get the most accurate classification possible,” another layer of quality assurance.

Alternatively, photos with easy-to-classify species, such as deer or squirrel, go directly to the final dataset. Bugel said, “If a photo is classified as a deer or fawn, we trust that the volunteer correctly identified the species.” These photos do not go to Zooniverse.

A deer fawn leaping through

Zooniverse

Photos of difficult-to-classify species or unclassified photos move on to Zooniverse, the crowdsourcing platform, for consensus classification. “Wolf and coyote photos, for example, always go to Zooniverse, because it is so hard to tell the difference, especially in blurry or nighttime photos,” said Bugel.

The Snapshot team has run accuracy analyses for most Wisconsin species to determine which species’ photos need consensus classification. All photos of species with low accuracies go to Zooniverse.

On Zooniverse, volunteers from around the globe classify the wildlife in these photos until a consensus is reached, a process called consensus classification. An individual photo may be classified by up to eleven different volunteers before it is retired, but it could be as few as five if a uniform consensus is reached early. “It all depends on how quickly people agree,” said Bugel.
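A simplified sketch of such a retirement rule is shown below; the exact Zooniverse retirement logic may differ in detail, and the vote sequences here are invented:

```python
# Simplified consensus-retirement rule: retire once five volunteers agree,
# otherwise stop at eleven classifications and send the photo to expert review.
from collections import Counter

def classify_until_retired(votes, consensus=5, max_votes=11):
    """votes: species labels in the order volunteers submit them."""
    counts = Counter()
    for i, species in enumerate(votes, start=1):
        counts[species] += 1
        if counts[species] >= consensus:
            return species, i           # consensus reached, photo retired
        if i >= max_votes:
            break                       # hit the cap without consensus
    return None, min(len(votes), max_votes)  # no consensus: expert review

print(classify_until_retired(["wolf"] * 5))                                     # ('wolf', 5)
print(classify_until_retired(["wolf"] * 4 + ["coyote"] * 4 + ["red fox"] * 3))  # (None, 11)
```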

Team members upload photos to Zooniverse in sets of ten to twenty thousand, and each set is called a season. Bugel explained, “Once all of the photos in that season are retired, we take a few days break to download all of the classifications and add them to our final dataset. Then, a Snapshot team member uploads another set of photos to Zooniverse.” Each set takes roughly two to four weeks to get fully classified on Zooniverse.

To date, over 10,400 people have registered to classify photos on Zooniverse, and around 10% of all Snapshot photos have been classified by these volunteers.

Expert Review

It is also possible for no consensus to be reached, even after eleven classifications. This means that no species received five or more votes out of the eleven possible classifications. These photos are set aside for later expert review.

Expert review was recently implemented by the Snapshot team and is the last step before difficult photos go into the final dataset. The team has to make sure all photos have a concrete classification before they can go into the final dataset, yet some photos never reached a consensus. Team members review these photos again, while looking at the records of how each photo was classified during initial review and on Zooniverse. While there will always be photos that are unidentifiable, expert review by staff helps ensure that every photo is classified as completely as possible, even the hard ones.

The Final Dataset and Informing Wildlife Management

Our final dataset is the last stop for all photos. This dataset is used by DNR staff to inform wildlife management decisions around the state.

Bugel said, “The biggest management decision support that Snapshot provides right now is fawn-to-doe ratios. Jen [Stenglein] uses Snapshot photo data, along with data from other initiatives, to calculate a ratio of fawns to does each year and that ratio feeds into the deer population model for the state.”

Snapshot has also spotted rare species, such as a marten in Vilas County and a whooping crane in Jackson County. Snapshot cameras even caught sight of a cougar in Waupaca County, one of only a handful of confirmed sightings in the state.

The final dataset feeds into other Snapshot Wisconsin products, including the Data Dashboard, and helps inform management decisions for certain species like elk. Now that the final dataset has reached a sufficient size, the Snapshot team is expanding its impact by feeding into other decision-making processes at the DNR and developing new products. 

The Snapshot team hopes that this explanation helps clarify some of the questions our volunteers have about what happens to their photos. We know the process can seem complicated at first, and the Snapshot team is happy to answer additional questions about the process. Reach out to them through their email or give them a call at +1 (608) 572 6103.

An infographic showing how photos move from download to final data

Featured Researcher: Hannah Butkiewicz

One of the remarkable elements of the Snapshot Wisconsin program is our ability to work closely with University collaborators across the state. When trail camera hosts upload and classify their photos, they provide valuable data for our program. Research collaborators can then use this data to answer critical questions about our state’s wildlife.

Hannah Butkiewicz is one of Snapshot Wisconsin’s current collaborators. Growing up in rural Wisconsin, Hannah became interested in wildlife during a high school internship. From monitoring Karner Blue butterflies, to tracking wolves, planting prairies, and sampling fresh-water mussels, Hannah describes that summer as “life-changing.” Thanks to the mentorship of one of her teachers, she went on to pursue her interests in wildlife research. Hannah is currently working towards her M.S. at UW-Stevens Point’s Natural Resources program under the supervision of Professor Jason Riddle.

Hannah is investigating three main questions that will provide important information for wildlife management decision support:

  1. What are the estimated ratios of poults (young turkeys) to hens (adult female turkeys), and what is the average brood (group of offspring) size?
  2. Is there a difference in wild turkey reproduction and population growth between habitats that are more than 50% forested and those that are less than 50% forested?
  3. What is the effectiveness of using Snapshot Wisconsin trail camera images to assess wild turkey reproduction and population growth?

Wild turkey and group of young

A hen and her brood of poults captured on a Snapshot Wisconsin trail camera.

In order to answer these questions, Hannah’s research will use two different data sources. First, she will analyze wild turkey data that our volunteers have collected from all over the state. These turkey photos are from April through August of 2015-2020. So far, Hannah and her research assistant have reviewed nearly 50,000 of these photo triggers. When they look at each photo, they document the number of hens, poults, toms (adult males) and jakes (juvenile males). This information will help her answer her first research question relating to hen-to-poult ratios and average brood size. It will also help her determine if there is a difference in reproduction and population growth between habitats that are more than 50% forested and those that are less than 50% forested.

The other side of Hannah’s research involves working with a select number of Snapshot Wisconsin volunteers to place additional cameras and sound recording equipment near existing Snapshot trail cameras. A single trail camera is limited in how many animals it can capture because it only detects what passes directly in front of its view. In order to better compare the Snapshot Wisconsin trail camera triggers to the turkeys present outside of the camera’s range, three additional cameras were placed around several deployed Snapshot cameras to form a 360-degree view of the surrounding area (see Figure 1). The automated recording unit will be used to record any turkey calls from individuals that are not within view of the trail cameras, either due to foliage or distance. Hannah plans to check these extra cameras and recording units once a month for the rest of this summer. Having additional trail camera photos and sound recordings of turkeys will help her determine the efficiency of using Snapshot Wisconsin trail camera images for monitoring wild turkey reproduction and population growth. It will also allow her to adjust her hen-to-poult ratio estimates.

Hannah Butkiewicz's Camera Graphic

Figure 1. Created by Hannah Butkiewicz.

When describing her experience as a graduate student working on this project, Hannah said, “The overall experience so far has been great! I am enjoying all my classes and have developed professional relationships with my advisers, graduate students, campus professors and other professionals. Graduate school requires a lot of hard work and dedication, but it sure helps to have a great team!”

Hannah plans to finish her research in August of 2021 in order to have time to write her thesis and graduate by December of next year. We wish Hannah the best of luck in continuing with her graduate studies and we look forward to providing updates on final research findings in the future!

Rare Species Sighting: Cougar

The following piece was a collaboration between Sarah Cameron and Claire Viellieux. A summarized version was published in the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link. 

Another species has joined the list of rarities captured on Snapshot Wisconsin trail cameras: a cougar from Waupaca County. The image was confirmed as containing a cougar by the Wisconsin DNR Wildlife Management team, which has confirmed other rare Snapshot Wisconsin species, including moose, American marten and whooping crane.

A cougar

Confirmed cougar captured on a Waupaca County trail camera.

Cougars (also called mountain lions or pumas) are the largest species of wildcats in North America, with males weighing up to 160 lbs and standing roughly 30 inches tall at shoulder height. Their coats are often yellowish-brown while their belly, inside legs, and chin are white. Another distinguishing characteristic is the black tip at the end of their long tails.

Cougars once roamed the landscapes of Wisconsin and played a key role in the ecosystem as one of the few apex predators, but by 1910, cougar populations had disappeared from the state altogether. While there have been several verified sightings in recent years (with the majority identified as males), there is currently no evidence of a breeding population. Biologists believe that cougars spotted in Wisconsin belong to a breeding population from the Black Hills of South Dakota. That’s over 600 miles these felines have hiked in order to make it to Wisconsin!

This sighting brings Wisconsin’s number of confirmed cougars for this year to a total of three, with the other sightings being reported from trail cameras in Price and Portage Counties. Tom, who initially identified the cougar on this Snapshot Wisconsin trail camera, shared, “[I’m] glad to have been part of it and hope to find something interesting in front of my camera again in the future.”

To learn more about cougars in Wisconsin, check out this episode of the DNR’s Wild Wisconsin podcast.

Whether you are a Zooniverse volunteer or a trail camera host, please let us know if you see a rare species in a Snapshot Wisconsin photo. If you spot them in the wild or on a personal trail camera, report the observation using the Wisconsin large mammal observation form.

Sources:
Cougars in Wisconsin
Wild Wisconsin: Off the Record Podcast Ep. 24

Translating Trail Camera Images into Deer Population Metrics

The following piece was written by project coordinator Christine Anhalt-Depies, Ph.D. for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

The sight of deer fawns and their mothers along roadsides and in fields may be a sign to some that summer has arrived in Wisconsin. For an ecologist, fawns represent the new “recruits,” or the number of individuals that are added to the deer population each year. Understanding the number of fawns on the landscape is an essential part of estimating the size of the deer herd in Wisconsin. Since the launch of Snapshot Wisconsin, trail camera photos have played an increasingly important role in this process.

Fawn-to-doe ratios, along with information collected from harvested animals, are the primary way the Wisconsin DNR determines the size of the deer population prior to harvest. Simply put, a fawn-to-doe ratio is the average number of fawns produced per adult doe. This important metric varies across the state and year to year. The number of fawns produced per doe can depend on food availability, winter severity and resource competition, among other factors. For example, cold temperatures and deep snow in a given year can be difficult on the health of does, resulting in fewer fawns come spring. Southern Wisconsin farmland, on the other hand, provides good food sources for deer, and fawn-to-doe ratios are typically higher in these regions compared to northern forested areas.

 

Maps of Wisconsin

Number of camera sites contributing to Fawn-to-Doe ratios by year.

 

Traditional surveys used to gather information about fawns and does, such as roadside observations, can have limitations due to factors like weather, topography, or time. Snapshot Wisconsin helps fill critical gaps by contributing additional data and providing improved spatial coverage. Eric Canania, Southern District Deer Biologist with the Wisconsin DNR, explained, “Snapshot’s camera coverage differs from traditional [fawn-to-doe ratio] collection methods by allowing access to observations within the heart of private lands… Although the state of Wisconsin boasts a fair amount of public land, the primary land type is still in private ownership. This means that it’s very important for us to provide [fawn-to-doe ratio] values that come from private and public lands alike and can be collected in various habitat [and] cover types.” In 2019, Snapshot Wisconsin data contributed fawn-to-doe ratios in every single county — the first time this has happened since Snapshot Wisconsin’s launch. In fact, 2019 marks a 50% increase in data collection by Snapshot trail camera volunteers compared to the previous year.

 

Map of Wisconsin

2019 fawn-to-doe ratios estimated from Snapshot Wisconsin photos.

 

To calculate fawn-to-doe ratios, researchers look across all photos at a given camera site during the months of July and August. Having already survived the first few weeks of life in early summer, fawns seen in these months have made it through the riskiest time in their lives. July and August are also ideal for detecting fawns. They are no longer hiding from predators but instead moving around at the heel of their mother. Their characteristic spots also make them easily distinguishable from yearling or adult deer. With the critical help of volunteers, researchers identify and count all photos with does and/or fawns in them from a given camera. They then divide the average number of fawns in these photos by the average number of does. This accounts for the fact that the same doe and fawn(s) may pass in front of the same camera many times throughout the summer months. Averaging all the data from across a county, researchers can report a Snapshot Wisconsin fawn-to-doe ratio.
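Put as a small worked example, the calculation looks roughly like the sketch below. The photo counts are invented, and the DNR’s actual averaging across cameras and counties may differ in detail:

```python
# Rough sketch of a county fawn-to-doe ratio from July-August photos.
# Each tuple is (fawns in photo, does in photo); all counts are made up.
camera_photos = {
    "cam_A": [(1, 1), (2, 1), (1, 1)],
    "cam_B": [(0, 1), (1, 2)],
    "cam_C": [(2, 2), (1, 1), (0, 1), (1, 1)],
}

def site_ratio(photos):
    """Average fawns per photo divided by average does per photo at one camera."""
    avg_fawns = sum(f for f, _ in photos) / len(photos)
    avg_does = sum(d for _, d in photos) / len(photos)
    return avg_fawns / avg_does

county_ratio = sum(site_ratio(p) for p in camera_photos.values()) / len(camera_photos)
print(round(county_ratio, 2))  # about 0.82 fawns per doe in this toy example
```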

Fawn-to-doe ratios and population estimates are key metrics provided to Wisconsin’s County Deer Advisory Councils (CDACs). “CDACs are responsible for making deer management recommendations [to the Department] within their individual county,” explained Canania. In this way, “Snapshot provides an awesome opportunity for Wisconsin’s public to become involved and help us produce the most accurate deer management data possible.”

Pondering the Presence of Multiple Species in One Photo

Collecting photos from a fixed location can give us an idea of which animal species are utilizing the same space. Often hours, or even days, elapse between photos of two different species crossing the same area. However, a small percentage of our trail camera photos capture moments of more than one species in the frame together. We recently took inventory of these multi-species instances, and despite our data set growing to over 2.5 million animal triggers, only around 4,300 of them contain multiple species within the same photo.

Figure 1 – Known combinations of species appearing in Snapshot Wisconsin photos. Combinations are alphabetical from left to right.

So far, we have confirmed 128 combinations of species appearing in photos together, 119 of which are combinations of two species and 9 of which are combinations of three species. Deer are most commonly one of the animals present in these multi-species occurrences. Thirty-seven of these multi-species occurrences are unique combinations of species that have only been observed once in our data set (orange lines in Figure 1). A few examples of these include elk with turkey, red fox with opossum, and other bird with porcupine.
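For the curious, tallying these combinations is straightforward once each trigger’s species list is known. The sketch below uses invented trigger data to show the idea; it is not the actual Snapshot analysis code:

```python
# Count species combinations that appear together in the same trigger,
# and how many combinations were seen only once. Trigger data is invented.
from collections import Counter

triggers = [
    {"deer", "turkey"},
    {"deer", "turkey"},
    {"elk", "turkey"},
    {"mink", "beaver"},
    {"deer", "raccoon", "other bird"},
]

combo_counts = Counter(
    tuple(sorted(species)) for species in triggers if len(species) > 1
)

print(len(combo_counts), "distinct combinations")
print(sum(1 for n in combo_counts.values() if n == 1), "combinations seen only once")
```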

Figure 2 – A bobcat is startled by a bird, possibly an American Woodcock

In the future, we hope to perform formal analyses with these data; however, there are certain challenges that we must consider. An example of one such consideration for analyzing multi-species trail camera information is detectability. In general, a trail camera has a higher chance of firing if an animal that wanders into its field of view is both large and close to the camera. This might account for the high number of instances of deer and “other birds” occurring together. Birds, especially small birds, may be present at the site many times throughout the day, but may only be captured on camera when a deer, which is generally large enough to reliably trigger the camera, steps into the frame.

Figure 3 – A raccoon and a deer come face-to-face

One observation that can be made from this preliminary analysis is that species that tend to utilize a particular habitat type may be more likely to be pictured together. For example, we have photos of mink and beaver together as well as muskrat and raccoon. These combinations are intuitive because all four species are commonly associated with water.

Figure 4 – A turkey flees a running bobcat

Another observation is that many of the combinations are of two species that do not have a strong predator-prey relationship. For example, deer and turkey are the two species most commonly pictured together, and neither is a predator of the other. Conversely, both bobcats and turkeys are relatively well-represented in the data set, yet we might not expect to see the two species together considering one is a predator and the other might be prey. Indeed, we have only observed one trigger of the two species together.

We hope that Snapshot Wisconsin can continue to shed light on interactions between species such as deer and predators of deer, as this was an early goal of the project. In the meantime, we ponder the many ways in which these space- and time-dependent occurrences are unique.

If you find an interesting interaction between two species on Snapshot Wisconsin, send it to us at DNRSnapshotWisconsin@Wisconsin.gov or share it on our Zooniverse page!