
Individuals Matter Too! – When You Can ID Them

The following piece was written by OAS Communications Coordinator Ryan Bower for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

Elk are similar to deer in that they lack identifiable markings most of the time. This makes it hard to know whether an elk in one photo is the same elk that appears in another photo. However, some elk in Wisconsin have uniquely numbered collars, making it possible to identify one individual elk from another.

Using these collars, researchers can piece together all the Snapshot photos of that elk and follow its movements through time. Knowing that the elk in two different photos is the same individual is especially powerful for researchers because it provides extra information about the size of the elk herd. That is, if the researchers can leverage that additional information.

Glenn Stauffer, Natural Resources Research Scientist within the Office of Applied Science, is leading the initiative to identify individual elk and use these data to improve the annual elk population estimate. Stauffer said, “I was approached because of my quantitative modeling experience to evaluate different ways of using the elk photographs as data to fit an elk model. [Collectively,] the various models I and others have worked on provide a range of options to estimate the [elk] population size and to evaluate how reliable the models are.”

Elk Herd

Identifying Individuals

To better understand the significance of Stauffer’s work, it helps to know how elk have historically been counted in Wisconsin.

“Long before I came onto the scene, the primary way of counting elk was to go out and count them all,” said Stauffer. This method requires extensive time in the field and considerable local knowledge about where elk groups often hang out. Researchers could count some elk by their numbered collars, but they also needed to know how many uncollared elk were in each group. The elk herd grew over the years, and more and more elk did not have identifiable collars. This added another challenge for researchers who were trying to count all the unmarked elk (and make sure they weren’t double counting any of them).

Since the estimate of the elk population size still needed to include an unknown number of these unmarked individuals, the DNR started experimenting with models that didn’t require individual identifications. These new models were also a boon because the herd was growing too large to collar efficiently; collaring every animal was becoming too time-consuming and expensive.

Instead, these models are based on images from the Snapshot camera grid, as discussed in the previous article, but even these camera-based models had room for improvement. Thus, Stauffer began researching a model that incorporated the best of both approaches: a model that was based on the camera data but still incorporated limited individual identification back into the model.

An antlered bull elk with a tracking collar

Stauffer’s Model

Stauffer looked into a variety of models but zeroed in on one type in particular: spatial mark resight models, which combine the best of both marked and unmarked models. Stauffer’s model identifies individuals by their collars but also makes inferences from the photos of unmarked elk at the same time.

Spatial mark resight models also relax a major assumption made by the previous camera model, the closure assumption. “This assumption states that the number of elk at a particular camera location doesn’t change from one encounter occasion to the next, and it is clearly violated. Elk are wandering from camera to camera,” said Stauffer. Stauffer’s hybrid model relaxes the closure assumption and attempts to figure out the minimum number of distinct elk it can identify from the pictures.

Collared elk are often easy to identify in the photos. These collared elk are given the ID assigned to their respective collar number so that all photos of a particular elk share the same ID. The model also attempts to assign IDs to uncollared elk in the photos. It probabilistically assigns IDs to all remaining elk – either uncollared elk or unknown elk (because the collar or the collar number isn’t visible in the photo) – based on characteristics visible in each photo.

Fortunately, Stauffer’s model uses as much information as it can get from the photos when assigning IDs. For example, if one photo is of a calf and another photo is of a cow, then the model won’t assign the same ID to these animals. After all, we know those are two distinct elk, not one. Similarly, a marked but unidentified elk with one collar type can’t be the same as another unidentified elk with a different collar type. The model even uses spatial data to differentiate unmarked elk from two different photos. For example, photos at two locations close together might be from the same elk, but photos from two distant locations probably represent two different elk.
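For readers who like to see the logic spelled out, these exclusion rules can be sketched as a simple compatibility check between two detections. This is only an illustration of the idea, not the DNR’s actual model code; the field names, age classes and the 10 km distance threshold are all assumptions made up for the example.

```python
from dataclasses import dataclass
from math import hypot
from typing import Optional

@dataclass
class Detection:
    """One elk sighting from a trail camera photo (illustrative fields)."""
    age_class: str              # e.g. "calf", "cow", "bull"
    collar_type: Optional[str]  # None when no collar is visible
    x_km: float                 # camera easting, in km
    y_km: float                 # camera northing, in km

def could_be_same_elk(a: Detection, b: Detection, max_km: float = 10.0) -> bool:
    """Return False only when the photos rule out a shared ID."""
    # A calf and a cow (or bull) cannot be the same individual.
    if a.age_class != b.age_class:
        return False
    # Two different visible collar types cannot be the same individual.
    if a.collar_type and b.collar_type and a.collar_type != b.collar_type:
        return False
    # Photos from cameras far apart probably show two different elk.
    if hypot(a.x_km - b.x_km, a.y_km - b.y_km) > max_km:
        return False
    return True
```

In the real spatial mark resight model these comparisons enter as probabilities rather than hard yes-or-no rules, but the exclusions work the same way.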

Capitalizing on all the information available in the Snapshot photos, the model makes an estimate of how many elk are likely in Wisconsin’s elk herds. As the elk herds continue to grow, this modeling approach helps estimate the elk population and hopefully saves the DNR time and money.

Bull Elk

How well does the model work?

“[Technically,] the spatial count model doesn’t require any information about individual IDs, but it performs pretty poorly without them,” said Stauffer. “There is a series of papers from about 2013 on that shows if you add information about individuals to spatial counts, you can really improve the accuracy and precision of the spatial model.”

“Theoretically, this makes the model estimates more precise,” said Stauffer. To check, Stauffer collaborated with a colleague to run a bunch of simulations with known, perfect data, and the model worked reasonably well. These simulation results are encouraging because the model wasn’t massively overpredicting or underpredicting the number of elk in the herds, both of which could have management implications for elk.

When asked if identifying individuals from photographs is worth the extra effort, Stauffer said, “Working with models that don’t require individual IDs still requires considerable time to classify photos. Identifying individuals is only a little bit more work on top of that. In general, when you can’t meet the assumptions of a model, then it is worth getting individual identifications, if you can.”

Just how much additional effort should be put into individual IDs? Stauffer believes part of the answer comes from asking what other information can be obtained from the collars. “If we are already putting the collars on those we capture or release, then we might as well get as much out of them as possible, such as through using photographs [like Snapshot does],” said Stauffer.

Incorporating Another Year

After the Snapshot team finishes assembling the 2020 elk dataset, a large dataset comprising the data from all the Snapshot photos of elk in 2020, Stauffer will run his model on this new dataset and generate an estimate of last year’s final elk population. Stauffer’s estimate will be compared closely with other estimates generated by the previous camera-based models and by collaring efforts alone to see how well each approach performs.

Stauffer took a minute to reflect on his work so far with the elk population estimate. Stauffer said, “The modeling process has been really rewarding, diving into this topic in a depth that I would not have done if I did not have this Snapshot photo dataset to work with. The simulation also went well. It illustrated that the model works the way we claim it works, which is good. Fitting the model to the elk data is mostly encouraging, but it shows that there are situations where it doesn’t do as good of a job as we hoped. Specifically, for calves, it still needs to be fine-tuned.”

From physically counting elk to modeling counts of only unknown individuals to modeling counts of both unknown and known individuals, Wisconsin’s approach to estimating elk abundance has evolved through time. Chances are, as the composition and distribution of the herd changes in the coming years, the approach will evolve even more. But for the next few years, Stauffer’s work will help direct how we count elk now.

Elk Snapshots Mean Better Elk Modeling


At the start of every year, DNR staff begin compiling a large dataset of elk sightings from the previous calendar year, and the data, once compiled, is used to calculate the total number of elk that live in the state. This method has been a standard practice since the second reintroduction of elk to Wisconsin.

What some of you may not know is that Snapshot plays an important role in counting elk by providing sightings, particularly of bulls. In fact, Snapshot has more than 250 of its cameras (over 10% of all Snapshot cameras) dedicated to monitoring elk alone. These cameras are clustered in the three areas of the state that hold parts of the elk herd – Clam Lake, Flambeau River and Black River Falls. These elk cameras are arranged in a grid-like pattern in each area, just like the rest of the Snapshot grid, except that the density of cameras in the elk grid is a lot higher.

A few members of the Snapshot team are among those working on the 2020 elk dataset, so the team decided to focus this newsletter on elk and how they use photos to learn about the elk herd.

An elk cow, bull, and two calves.

The Snapshot team invited Dr. Jennifer Price Tack, Large Carnivore and Elk Research Scientist and fellow scientist within the Office of Applied Science, to add her perspective to this newsletter on why Snapshot’s photos matter for elk.

The task of integrating Snapshot data into the elk model was originally the work of Joe Dittrich, who laid a solid foundation for Price Tack. Since Price Tack joined the Office of Applied Science at the end of 2019, she has been using Snapshot data to model how various quota alternatives will affect the elk herd size in the years to come.

“My research focuses on [elk] populations because populations are the scale at which we manage wildlife,” said Price Tack. Population is the starting point for all decisions that are made about managing wildlife in Wisconsin. The status of a population determines how decisions are made, policy is framed, quotas are set, permits are allocated, and so on… Population is the unit of concern for the DNR.

Price Tack continued, “While individual animals are important and make up a population, our ability to manage them breaks down some at the individual level, [simply] due to the infeasibility of monitoring individuals.”

For species like elk, which normally lack easy-to-identify markings, individual identification is often difficult. Possible, as discussed in the next article, but difficult. Thus, most species work at the DNR, including Price Tack’s work on elk, is done at the population scale.

As Price Tack walks us through her research on the elk population, check out the unique way that Snapshot photo data are used to monitor this large herbivore population.

Feeding Photo Data Into The Model

Photos of elk can have multiple forms of data in them, beyond just what animals are present in the picture. There is camera location data, for example, which provides information about which areas of land the elk are using and not using.

There is also movement data. The Snapshot team learned that elk calves, cows and bulls have different movement patterns and are seen at different rates throughout the year. When bulls are the most active, for example, cows tend to be less active.

The camera data also helps Snapshot determine a calf-to-cow ratio for elk, though it isn’t as simple as dividing the number of calf photos by the number of cow photos. Cows move around more than calves do and are more detectable in photos, given their larger size. Using knowledge about calf and cow visibility, calves and cows are modeled separately, and those numbers are then used to calculate the calf-to-cow ratio for elk.
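The idea of correcting raw photo counts for unequal detectability before taking the ratio can be shown with a little arithmetic. The counts and detection probabilities below are invented for the example; they are not Snapshot’s actual estimates.

```python
def estimate_abundance(photo_count: int, detection_prob: float) -> float:
    """Correct a raw photo count for imperfect detectability."""
    return photo_count / detection_prob

# Invented numbers: cows are larger and move around more, so a higher
# fraction of them end up in photos than calves.
cow_est = estimate_abundance(photo_count=600, detection_prob=0.6)   # about 1000
calf_est = estimate_abundance(photo_count=200, detection_prob=0.4)  # about 500

# Take the ratio of the corrected estimates, not the raw photo counts.
calf_cow_ratio = calf_est / cow_est  # about 0.5
```

Note how the raw photo counts alone (200 calf photos to 600 cow photos) would suggest far fewer calves per cow than the corrected estimates do.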

“I remember first learning about Snapshot and thinking it is such a cool resource! There is so much you can do with camera data.” said Price Tack. “I have experience working in other systems that use camera data, so I know [firsthand] that using camera data has a lot of benefits” – benefits like providing many forms of data at once and being more cost-effective than extensive collaring. “I wanted to tap in and work with these folks.”

Elk herd walking through the snow

Price Tack mentioned that she even had the Snapshot logo in her interview presentation. She was already thinking about how to get the most out of Snapshot’s camera data.

“Now that I’m here, my focus is on filling research needs to inform decisions,” Price Tack continued. “[Our research] is going to be critical to helping wildlife management and species committees make informed decisions for elk, such as deciding elk harvest quotas in the upcoming years. Snapshot data is one tool we can use to fill those research needs. It is available, and I’d like to use it as much as feasible.”

Besides estimating the population of the elk classes (e.g. calves, cows and bulls), Snapshot data is currently being used to help estimate population parameters and help us understand what is happening with the population. Population parameters are estimates of important characteristics of the population, such as recruitment (birth rate), mortality (death rate) and survival rates of different elk classes within the population.

Price Tack’s model uses matrix algebra to take an initial elk population size and project the population into the future, using what we know about elk population parameters. In other words, the model can predict how large the elk population is likely to grow in the years to come. However, natural variation can cause some years to be unpredictably good or bad for elk, so the model needs to be updated each year to keep its accuracy as high as possible.
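For the curious, a matrix projection of this kind can be sketched with a simple two-stage (calf and adult) model. All of the vital rates and starting numbers below are invented for illustration; they are not the DNR’s estimates.

```python
import numpy as np

# Hypothetical vital rates, invented for illustration only.
recruitment = 0.4      # calves recruited per adult per year
calf_survival = 0.6    # probability a calf survives to adulthood
adult_survival = 0.85  # annual adult survival probability

# Two stages: [calves, adults].
A = np.array([
    [0.0,           recruitment],     # next year's calves come from adults
    [calf_survival, adult_survival],  # next year's adults: surviving calves + adults
])

n = np.array([50.0, 200.0])  # initial population: 50 calves, 200 adults
for year in range(5):
    n = A @ n  # project the population one year forward

# n now holds the projected [calves, adults] after five years.
```

Updating the model each year amounts to re-estimating the rates in the matrix from the newest data before projecting forward again.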

Thanks to Snapshot’s camera data, we have a system in place to calculate each year’s population parameters and continue updating the model each year. This should help us catch if anything of concern happens to the population and (hopefully) fix it before it becomes a threat.

Improving the Elk Model

Another of Price Tack’s tasks related to Snapshot is improving the elk model. Many of the improvements Price Tack is researching aim to address data collection for a larger population.

The elk population was very small when the DNR first reintroduced elk to the state in 1995 and again in 2015. The DNR used intensive monitoring methods back then to collar (and track) every elk in the herd, since intensive methods are best suited for small populations. However, with the elk herd doing so well, it won’t be long before a different approach is needed. The DNR wants to transition to a method more appropriate for a larger elk population.

Currently, the DNR is early in the process of ramping up non-invasive, cost-effective methods like Snapshot monitoring and scaling back the collaring effort. This transition will take time, though, happening over the next few years.

Price Tack also mentioned another modification under consideration. Price Tack and the Snapshot team are looking into repositioning some of the cameras within the elk grid. Currently, the elk grid doesn’t perfectly align with where the elk are congregating. There are a few cameras outside of the elk range that don’t see any elk, and there are edges of the elk’s range that extend beyond where the cameras are deployed. Repositioning the cameras should mean more elk pictures, which means more elk data.

Elk calf

The Frontier of Camera Monitoring

The role of Snapshot in monitoring elk is evolving, and Price Tack and the Snapshot team believe it is for the better. While they can’t guarantee that Snapshot will always play a central role in collecting data on elk, Snapshot will fill this role for the next few years at least.

Price Tack said, “This is the frontier of using camera trap data for elk. Every year, new approaches to using camera trap data are being developed. That has me excited that, even though we don’t have all the answers now, more opportunities may be on the horizon.”

You can also get more elk-related news by signing up for the Elk in Wisconsin topic on GovDelivery. Joining this email list (or others like it, including a GovDelivery topic for Snapshot Wisconsin) is the best way to make sure you don’t miss out on news you are interested in.

What Happens to Photos Once Uploaded?


Since Snapshot reached 50 million photos, the Snapshot team felt it was a good time to address one of the most asked questions about photos: what happens to photos once they are uploaded by volunteers? At first, the process seems complicated, but Jamie Bugel, a member of the Snapshot team, is here to walk us through it, one step at a time.

Bugel is a Natural Resources Educator and Research Technician at the DNR, but she works on the volunteer side of Snapshot. Bugel said, “I mainly help volunteers troubleshoot issues with their equipment or with their interactions with the MySnapshot interface. I am one of the people who answer the Snapshot phone, and I help update the user interface by testing functionality. There is also lots of data management coordination on the volunteer side of the program that I help with.”

Bugel listed off a few of the more common questions she and the rest of the Snapshot team get asked, including who reviews photos after the initial classification, what happens to the photos that camera hosts can’t identify, and how mistakes get rectified. “We get asked those [questions] on a weekly to daily basis,” said Bugel.

It Starts With a Three-Month Check and an Upload

Every three months, trail camera hosts are supposed to swap out the SD card and batteries in their trail camera. At the same time, volunteers fill out a camera check sheet, including what time of day they checked the camera, how many photos were on the SD card and if there was any equipment damage.

“You should wait at least three months to check your camera, so you won’t disturb the wildlife by checking more often. We want to view the wildlife with as little human interference as possible,” said Bugel. “At the same time, volunteers should check [their camera] at least every three months, because batteries don’t last much longer than three months. Checking this often is important to avoid missing photos.”

After the volunteer does their three-month check, they bring their camera’s SD card back to their home and enter the information on their camera check sheet into their MySnapshot account and upload their photos.

Bugel said it can take anywhere from 4 to 48 hours for the photos to appear in the volunteer’s MySnapshot account. Fortunately, the server will send an email when the photos are ready, so volunteers don’t have to keep checking. Volunteers can start classifying their photos after receiving the email.

A fisher walking through the snow

Initial Classification By Camera Hosts

The first round of classification is done by the trail camera hosts. The returned photos will sit in the Review Photos section of their MySnapshot account while the host classifies the photos as Human, Blank or Wildlife. The wildlife photos are also further classified by which species are present in the photo, such as beaver, deer or coyote.

This initial classification step is very important for protecting the privacy of our camera hosts, and it also helps on the back end of data processing. Over 90% of all photos are classified at this step by the camera hosts. When hosts are done classifying photos, they click “review complete,” and the set of photos is sent to the Snapshot team for the second round of classification.

Staff Review

The second round of classification is the staff review. Members of the Snapshot team review sets of photos to verify that all human or blank photos have been properly flagged. “For example, a deer photo may include a deer stand in the background. That type of photo will not go to Zooniverse because there is a human object in the photo,” said Bugel. Fortunately, nearly all human photos are taken during the initial camera setup or while swapping batteries and SD card, so they are usually clumped and easy to spot.

The second reason that staff review photos after the initial classification is for quality assurance. Since some animal species are tricky to correctly classify, someone from the Snapshot team reviews sets to verify that the photos were tagged with the correct species. This quality assurance step helps rectify mistakes. “Sometimes there are photos classified as blank or a fawn that are actually of an adult deer,” said Bugel. “We want to catch that mistake before it goes into our final database.”

In cases where the set of photos wasn’t classified by the camera host, the team will also perform the initial classification to remove human and blank photos. The Snapshot team wants to make sure any photos that reveal the volunteer’s identity or the location of the camera are removed before those photos continue down the pipeline.

Branching Paths

At this point in the process, photos branch off to different destinations, depending on their classification. Blank (43%) and human (2%) photos are removed from the pipeline. Meanwhile, the wildlife photos (20%) move on either to Zooniverse for consensus classification or directly to the final dataset. The remaining photos don’t yet fall into one of these categories, such as unclassified photos still awaiting initial review.

Photos of difficult-to-classify species, such as wolves and coyotes, are sent to Zooniverse for consensus classification. Bugel explained, “The photos [of challenging species] will always go to Zooniverse, even after volunteer classification and staff member verification, because we’ve learned we need more eyes on those to get the most accurate classification possible,” another layer of quality assurance.

Alternatively, photos with easy-to-classify species, such as deer or squirrel, go directly to the final dataset. Bugel said, “If a photo is classified as a deer or fawn, we trust that the volunteer correctly identified the species.” These photos do not go to Zooniverse.

A deer fawn leaping through

Zooniverse

Photos of difficult-to-classify species or unclassified photos move on to Zooniverse, the crowdsourcing platform, for consensus classification. “Wolf and coyote photos, for example, always go to Zooniverse, because it is so hard to tell the difference, especially in blurry or nighttime photos,” said Bugel.

The Snapshot team has run accuracy analyses for most Wisconsin species to determine which species’ photos need consensus classification. All photos of species with low accuracies go to Zooniverse.

On Zooniverse, volunteers from around the globe classify the wildlife in these photos until a consensus is reached, a process called consensus classification. An individual photo may be classified by up to eleven different volunteers before it is retired, but by as few as five if a uniform consensus is reached early. “It all depends on how quickly people agree,” said Bugel.
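That retirement rule can be sketched as a short counting loop. The thresholds of five and eleven come from the article, but the function itself is an illustration, not Zooniverse’s actual retirement code.

```python
from collections import Counter

def classify_until_retired(votes, early_retire=5, max_votes=11):
    """Feed in classifications one at a time and retire the photo either
    when the first `early_retire` votes are unanimous or after `max_votes`."""
    seen = []
    for vote in votes:
        seen.append(vote)
        # Early retirement: the first five classifications all agree.
        if len(seen) == early_retire and len(set(seen)) == 1:
            return seen[0], len(seen)
        if len(seen) == max_votes:
            break
    # Otherwise retire at the leading species, or with no consensus (None)
    # when no species reached `early_retire` votes; those go to expert review.
    species, count = Counter(seen).most_common(1)[0]
    return (species if count >= early_retire else None), len(seen)
```

For example, five straight “wolf” votes retire the photo early with a wolf consensus, while an even split across eleven votes comes back with no consensus and is set aside for expert review.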

Team members upload photos to Zooniverse in sets of ten to twenty thousand, and each set is called a season. Bugel explained, “Once all of the photos in that season are retired, we take a few days break to download all of the classifications and add them to our final dataset. Then, a Snapshot team member uploads another set of photos to Zooniverse.” Each set takes roughly two to four weeks to get fully classified on Zooniverse.

To date, over 10,400 people have registered to classify photos on Zooniverse, and around 10% of the total photos have been classified by these volunteers on Zooniverse.

Expert Review

It is also possible for no consensus to be reached, even after eleven classifications. This means that no species received five or more votes out of the eleven possible classifications. These photos are set aside for later expert review.

Expert review was recently implemented by the Snapshot team and is the last step before difficult photos go into the final dataset. The team has to make sure all photos have a concrete classification before they can go into the final dataset, yet some photos never reached a consensus. Team members review these photos again while looking at the records of how each photo was classified during initial review and on Zooniverse. While there will always be photos that are unidentifiable, expert review by staff helps ensure that every photo, even the hard ones, is classified as completely as possible.

The Final Dataset and Informing Wildlife Management

Our final dataset is the last stop for all photos. This dataset is used by DNR staff to inform wildlife management decisions around the state.

Bugel said, “The biggest management decision support that Snapshot provides right now is fawn-to-doe ratios. Jen [Stenglein] uses Snapshot photo data, along with data from other initiatives, to calculate a ratio of fawns to does each year and that ratio feeds into the deer population model for the state.”

Snapshot has also spotted rare species, such as a marten in Vilas County and a whooping crane in Jackson County. Snapshot cameras even caught sight of a cougar in Waupaca County, one of only a handful of confirmed sightings in the state.

The final dataset feeds into other Snapshot Wisconsin products, including the Data Dashboard, and helps inform management decisions for certain species like elk. Now that the final dataset has reached a sufficient size, the Snapshot team is expanding its impact by feeding into other decision-making processes at the DNR and developing new products. 

The Snapshot team hopes that this explanation helps clarify some of the questions our volunteers have about what happens to their photos. We know the process can seem complicated at first, and the Snapshot team is happy to answer additional questions about the process. Reach out to them through their email or give them a call at +1 (608) 572 6103.

An infographic showing how photos move from download to final data

Featured Researcher: Hannah Butkiewicz

One of the remarkable elements of the Snapshot Wisconsin program is our ability to work closely with University collaborators across the state. When trail camera hosts upload and classify their photos, they provide valuable data for our program. Research collaborators can then use this data to answer critical questions about our state’s wildlife.

Hannah Butkiewicz is one of Snapshot Wisconsin’s current collaborators. Growing up in rural Wisconsin, Hannah became interested in wildlife during a high school internship. From monitoring Karner Blue butterflies, to tracking wolves, planting prairies, and sampling fresh-water mussels, Hannah describes that summer as “life-changing.” Thanks to the mentorship of one of her teachers, she went on to pursue her interests in wildlife research. Hannah is currently working towards her M.S. at UW-Stevens Point’s Natural Resources program under the supervision of Professor Jason Riddle.

Hannah is investigating three main questions that will provide important information for wildlife management decision support:

  1. What are the estimated ratios of poults (young turkeys) to hens (adult female turkeys), and what is the average brood (group of offspring) size?
  2. Is there a difference in wild turkey reproduction and population growth between habitats that are more than 50% forested or less than 50% forested?
  3. What is the effectiveness of using Snapshot Wisconsin trail camera images to assess wild turkey reproduction and population growth?
Wild turkey and group of young

A hen and her brood of poults captured on a Snapshot Wisconsin trail camera.

In order to answer these questions, Hannah’s research will use two different data sources. First, she will analyze wild turkey data that our volunteers have collected from all over the state. These turkey photos are from April through August of 2015-2020. So far, Hannah and her research assistant have reviewed nearly 50,000 of these photo triggers. When they look at each photo, they document the number of hens, poults, toms (adult males), and jakes (juvenile males). This information will help her answer her first research question relating to poult-to-hen ratios and average brood size. It will also help her determine if there is a difference in reproduction and population growth between habitats that are more than 50% forested or less than 50% forested.
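As a rough illustration of how those per-photo tallies turn into summary numbers, here is a sketch with invented counts (not Hannah’s data); the field names are made up for the example.

```python
# Each record is a per-photo tally; the numbers are invented for the example.
photos = [
    {"hens": 1, "poults": 8, "toms": 0, "jakes": 0},
    {"hens": 2, "poults": 9, "toms": 0, "jakes": 1},
    {"hens": 1, "poults": 5, "toms": 1, "jakes": 0},
]

total_hens = sum(p["hens"] for p in photos)      # 4
total_poults = sum(p["poults"] for p in photos)  # 22

# Poult-to-hen ratio across all photos.
poult_hen_ratio = total_poults / total_hens  # 5.5

# Average brood size: poults per photo showing at least one hen with poults.
brood_photos = [p for p in photos if p["hens"] > 0 and p["poults"] > 0]
avg_brood_size = sum(p["poults"] for p in brood_photos) / len(brood_photos)
```

The real analysis would also need to account for detectability, which is where the extra cameras and recording units described below come in.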

The other side of Hannah’s research involves working with a select number of Snapshot Wisconsin volunteers to place additional cameras and sound recording equipment near existing Snapshot trail cameras. A single trail camera is limited in how many animals it can capture because it only detects what passes directly in front of its view. In order to better compare the Snapshot Wisconsin trail camera triggers to the turkeys present outside of the camera’s range, three additional cameras were placed around several deployed Snapshot cameras to form a 360-degree view of the surrounding area (see Figure 1). The automated recording unit will be used to record any turkey calls from individuals that are not within view of the trail cameras, either due to foliage or distance. Hannah plans to check these extra cameras and recording units once a month for the rest of this summer. Having additional trail camera photos and sound recordings of turkeys will help her determine the effectiveness of using Snapshot Wisconsin trail camera images for monitoring wild turkey reproduction and population growth. It will also allow her to adjust her poult-to-hen ratio estimates.

Hannah Butkiewicz's Camera Graphic

Figure 1. Created by Hannah Butkiewicz.

When describing her experience as a graduate student working on this project, Hannah said, “The overall experience so far has been great! I am enjoying all my classes and have developed professional relationships with my advisers, graduate students, campus professors and other professionals. Graduate school requires a lot of hard work and dedication, but it sure helps to have a great team!”

Hannah plans to finish her research in August of 2021 in order to have time to write her thesis and graduate by December of next year. We wish Hannah the best of luck in continuing with her graduate studies and we look forward to providing updates on final research findings in the future!

Rare Species Sighting: Cougar

The following piece was a collaboration between Sarah Cameron and Claire Viellieux. A summarized version was published in the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link. 

Another species has joined the list of rarities captured on Snapshot Wisconsin trail cameras, a cougar from Waupaca County. The image was confirmed as containing a cougar by the Wisconsin DNR Wildlife Management team, who have confirmed other rare Snapshot Wisconsin species, including moose, American marten and whooping crane.

A cougar

Confirmed cougar captured on a Waupaca County trail camera.

Cougars (also called mountain lions or pumas) are the largest species of wildcats in North America, with males weighing up to 160 lbs and standing roughly 30 inches tall at shoulder height. Their coats are often yellowish-brown while their belly, inside legs, and chin are white. Another distinguishing characteristic is the black tip at the end of their long tails.

Cougars once roamed the landscapes of Wisconsin and played a key role in the ecosystem as one of the few apex predators, but by 1910, cougar populations had disappeared from the state altogether. While there have been several verified sightings in recent years (with the majority identified as males), there is currently no evidence of a breeding population. Biologists believe that cougars spotted in Wisconsin belong to a breeding population from the Black Hills of South Dakota. That’s over 600 miles these felines have hiked in order to make it to Wisconsin!

This sighting brings Wisconsin’s number of confirmed cougars for this year to a total of three, with the other sightings being reported from trail cameras in Price and Portage Counties. Tom, who initially identified the cougar on this Snapshot Wisconsin trail camera, shared, “[I’m] glad to have been part of it and hope to find something interesting in front of my camera again in the future.”

To learn more about cougars in Wisconsin, check out this episode of the DNR’s Wild Wisconsin podcast.

Whether you are a Zooniverse volunteer or a trail camera host, please let us know if you see a rare species in a Snapshot Wisconsin photo. If you spot them in the wild or on a personal trail camera, report the observation using the Wisconsin large mammal observation form.

Sources:
Cougars in Wisconsin
Wild Wisconsin: Off the Record Podcast Ep. 24

Translating Trail Camera Images into Deer Population Metrics

The following piece was written by project coordinator Christine Anhalt-Depies, Ph.D. for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

The sight of deer fawns and their mothers along roadsides and in fields may be a sign to some that summer has arrived in Wisconsin. For an ecologist, fawns represent the new “recruits,” or the number of individuals added to the deer population each year. Understanding the number of fawns on the landscape is an essential part of estimating the size of the deer herd in Wisconsin. Since the launch of Snapshot Wisconsin, trail camera photos have played an increasingly important role in this process.

Fawn-to-doe ratios, along with information collected from harvested animals, are the primary way the Wisconsin DNR determines the size of the deer population prior to harvest. Simply put, a fawn-to-doe ratio is the average number of fawns produced per adult doe. This important metric varies across the state and year to year. The number of fawns produced per doe can depend on food availability, winter severity and resource competition, among other factors. For example, cold temperatures and deep snow in a given year can be difficult on the health of does, resulting in fewer fawns come spring. Southern Wisconsin farmland, on the other hand, provides good food sources for deer, and fawn-to-doe ratios are typically higher in these regions compared to northern forested areas.

 

Maps of Wisconsin

Number of camera sites contributing to fawn-to-doe ratios by year.

 

Traditional surveys used to gather information about fawns and does, such as roadside observations, can have limitations due to factors like weather, topography, or time. Snapshot Wisconsin helps fill critical gaps by contributing additional data and providing improved spatial coverage. Eric Canania, Southern District Deer Biologist with the Wisconsin DNR, explained, “Snapshot’s camera coverage differs from traditional [fawn-to-doe ratio] collection methods by allowing access to observations within the heart of private lands… Although the state of Wisconsin boasts a fair amount of public land, the primary land type is still in private ownership. This means that it’s very important for us to provide [fawn-to-doe ratio] values that come from private and public lands alike and can be collected in various habitat [and] cover types.” In 2019, Snapshot Wisconsin data contributed fawn-to-doe ratios in every single county — the first time this has happened since Snapshot Wisconsin’s launch. In fact, 2019 marks a 50% increase in data collection by Snapshot trail camera volunteers compared to the previous year.

 

Map of Wisconsin

2019 fawn-to-doe ratios estimated from Snapshot Wisconsin photos.

 

To calculate fawn-to-doe ratios, researchers look across all photos at a given camera site during the months of July and August. Having already survived the first few weeks of life in early summer, fawns seen in these months have made it through the riskiest time in their lives. July and August are also ideal for detecting fawns. They are no longer hiding from predators but instead moving around at the heel of their mother. Their characteristic spots also make them easily distinguishable from yearling or adult deer. With the critical help of volunteers, researchers identify and count all photos with does and/or fawns in them from a given camera. They then divide the average number of fawns in these photos by the average number of does. This accounts for the fact that the same doe and fawn(s) may pass in front of the same camera many times throughout the summer months. Averaging all the data from across a county, researchers can report a Snapshot Wisconsin fawn-to-doe ratio.
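The arithmetic described above can be sketched in a few lines of code. This is only an illustration with made-up photo counts, not real Snapshot Wisconsin data or the DNR's actual analysis pipeline:

```python
from statistics import mean

# Hypothetical July/August photos: each tuple is (fawns, does) counted
# in one photo at a camera site. These numbers are invented for illustration.
sites = {
    "site_A": [(2, 1), (1, 1), (0, 2), (2, 2)],
    "site_B": [(1, 1), (1, 2), (0, 1)],
}

def site_ratio(photos):
    # Average fawns per photo divided by average does per photo. Averaging
    # first accounts for the same doe and fawn(s) passing the camera
    # many times over the summer.
    return mean(f for f, _ in photos) / mean(d for _, d in photos)

# County-level estimate: average the per-site ratios.
county_ratio = mean(site_ratio(p) for p in sites.values())
print(round(county_ratio, 2))
```

Here site_A averages 1.25 fawns and 1.5 does per photo (a ratio of about 0.83), site_B averages 0.67 fawns and 1.33 does (a ratio of 0.5), and the county-level average works out to about 0.67 fawns per doe.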

Fawn-to-doe ratios and population estimates are key metrics provided to Wisconsin’s County Deer Advisory Councils (CDACs). “CDACs are responsible for making deer management recommendations [to the Department] within their individual county,” explained Canania. In this way, “Snapshot provides an awesome opportunity for Wisconsin’s public to become involved and help us produce the most accurate deer management data possible.”

Pondering the Presence of Multiple Species in One Photo

Collecting photos from a fixed location can give us an idea of which animal species are utilizing the same space. Often hours, or even days, elapse between photos of two different species crossing the same area. However, a small percentage of our trail camera photos capture moments of more than one species in the frame together. We recently took inventory of these multi-species instances, and despite our data set growing to over 2.5 million animal triggers, only around 4,300 of them contain multiple species within the same photo.

Figure 1- Known combinations of species appearing in Snapshot Wisconsin photos. Combinations are alphabetical from left to right.

So far, we have confirmed 128 combinations of species appearing in photos together: 119 are combinations of two species and 9 are combinations of three species. Deer are most commonly one of the animals present in these multi-species occurrences. Thirty-seven of the combinations have been observed only once in our data set (orange lines in Figure 1). A few examples of these include elk with turkey, red fox with opossum, and other bird with porcupine.
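Tallying co-occurring species like this is a straightforward counting exercise. A minimal sketch, using a handful of invented triggers rather than our actual data set:

```python
from collections import Counter

# Each trigger is the set of species identified in one photo.
# These example triggers are invented for illustration.
triggers = [
    {"deer", "turkey"},
    {"deer", "turkey"},
    {"elk", "turkey"},
    {"mink", "beaver"},
    {"deer"},                      # single-species triggers contribute nothing
    {"deer", "turkey", "raccoon"},
]

# Count each multi-species combination; sorting makes the count
# order-independent, so {"turkey", "deer"} and {"deer", "turkey"} match.
combos = Counter(tuple(sorted(t)) for t in triggers if len(t) > 1)

# Combinations seen exactly once (like the orange lines in Figure 1).
unique_combos = [c for c, n in combos.items() if n == 1]

print(combos[("deer", "turkey")], len(unique_combos))
```

In this toy data set, deer with turkey appears twice, while the other three combinations (including one three-species combination) each appear only once.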

Figure 2 – A bobcat is startled by a bird, possibly an American Woodcock

In the future, we hope to perform formal analyses with these data; however, there are certain challenges that we must consider. One such consideration for analyzing multi-species trail camera information is detectability. In general, a trail camera has a higher chance of firing if an animal that wanders into its field of view is both large and close to the camera. This might account for the high number of instances of deer and “other birds” occurring together. Birds, especially small birds, may be present at the site many times throughout the day, but may only be captured on camera when a deer, which is generally large enough to reliably trigger the camera, steps into the frame.

Figure 3 – A raccoon and a deer come face-to-face

One observation that can be made from this preliminary analysis is that species that tend to utilize a particular habitat type may be more likely to be pictured together. For example, we have photos of mink and beaver together as well as muskrat and raccoon. These combinations are intuitive because all four species are commonly associated with water.

Figure 4 – A turkey flees a running bobcat

Another observation is that many of the combinations are of two species that do not have a strong predator-prey relationship. For example, deer and turkey are the two species most commonly pictured together, and neither is a predator of the other. Conversely, both bobcats and turkeys are relatively well-represented in the data set, yet we might not expect to see the two species together considering one is a predator and the other might be prey. Indeed, we have only observed one trigger of the two species together.

We hope that Snapshot Wisconsin can continue to shed light on interactions between species such as deer and predators of deer, as this was an early goal of the project. In the meantime, we ponder the many ways in which these space- and time-dependent occurrences are unique.

If you find an interesting interaction between two species on Snapshot Wisconsin, send it to us at DNRSnapshotWisconsin@Wisconsin.gov or share it on our Zooniverse page!

 

Maps of the Zooniverse

The following piece was written by OAS Communications Coordinator AnnaKathryn Kruger for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

The opportunity to classify photos of wildlife from across Wisconsin draws a diverse array of individuals to our Zooniverse page. Some volunteers are trail camera hosts themselves and enjoy classifying photos from other camera sites. Zooniverse also offers this opportunity to those who are unable to host a camera but still wish to participate in the project.

The maps here were created using Google Analytics data, which can anonymously record information about users who access a webpage, such as their nearest city. These data show us that Snapshot Wisconsin reaches an audience far beyond Wisconsin, and even beyond the United States! In total, volunteers from 696 cities across 41 countries have interacted with the Snapshot Wisconsin Zooniverse page since 2016. Of those cities, 190 are in Wisconsin.

Each dot represents just one city, regardless of the number of individuals who accessed the site in that location. For example, the dot for the city of Madison could represent thousands of users. Zooming in on Wisconsin, we see that many dots are centered around the most populous areas, such as Madison, Milwaukee, Minneapolis and Chicago. This pattern can be attributed to the fact that these areas also host the highest concentration of suburbs.

Regardless of the volunteer’s location, each classification we receive is important to the success of Snapshot Wisconsin.

Wisconsin Map

World Map

Evaluating Project Participation Through Zooniverse

The following piece was written by OAS Communications Coordinator AnnaKathryn Kruger for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

One of the easiest ways to participate in Snapshot Wisconsin is by classifying photos through a website called Zooniverse. Zooniverse is a crowdsourcing service that is accessible to anyone, anywhere, and the site has hosted Snapshot Wisconsin since 2016. Snapshot Wisconsin’s most prolific Zooniverse volunteer has contributed over 65,000 classifications to the project’s dataset. To date, 1.9 million trail camera photos have been processed through Zooniverse, and more than 7,500 different individuals have registered to participate.

Zooniverse volunteers play a pivotal role in Snapshot Wisconsin. Analyzing volunteer participation gives staff a better idea of how to effectively engage volunteers and can also offer researchers a look at how patterns in participation relate to the overall quality of the data acquired from the platform.

In the interest of quantitatively assessing volunteer participation in Snapshot Wisconsin through Zooniverse, researchers conducted a Latent Profile Analysis (LPA) of our volunteers. LPA can be used to organize a sample of people into groups based on observable variables, such as user activity over time. Through this, researchers were able to ascertain how many different groups of people exist in the sample, which individuals belong to which group, and what characteristics are unique to each group. This allowed researchers to home in on specific patterns in user engagement.

Researchers identified measurable variables unique to each volunteer and their activity on Zooniverse between November 2017 and February 2019. These included the number of days each volunteer was active, time elapsed between active days, and the amount of time volunteers spent on the site on active days. From this, researchers parsed volunteers into three profiles: temporary, intermittent and persistent.

Volunteer Groups

Profiles of Snapshot Wisconsin volunteer participation on Zooniverse

Temporary volunteers are those who exhibited rigorous participation, but only for a short period of time. Intermittent volunteers are characterized by significant stretches of time between a relatively small number of active days. Persistent volunteers are those who demonstrated high levels of activity across the entire period examined.

Measures of accuracy specific to each group revealed that temporary volunteers demonstrate lower accuracy in their classifications compared to intermittent volunteers. Though intermittent volunteers tended to allow more time to go by between active days, the consistent practice ultimately made their classifications more accurate.

In this instance, we may turn to an old adage: practice makes perfect. It comes as no surprise that practice and accuracy are correlated, and that volunteers become better at classifying photos with more time spent doing so. In the graphic on the right, all four photos are of porcupines, though they vary in how difficult they are to classify. Though classifying photos like these may be tricky at first, over time certain characteristics begin to stand out more readily – a porcupine may be identified by its lumbering gait, or the way its quills appear from different angles and in different light. The more frequently one sees these traits, the easier they become to identify. Volunteers who participate at any level, whether temporary, intermittent or persistent, are of great value to the project, and the more time spent on Zooniverse, the more likely it is that the classifications assigned to each photo are accurate.

Porcupine

Citizen science is an integral part of the Snapshot Wisconsin project, and is in fact core to its mission, which is to rally the knowledge and resources of citizens across Wisconsin and throughout the world to build a comprehensive and highly accurate portrait of Wisconsin wildlife. No two Zooniverse volunteers are quite the same, and each individual informs our understanding of how citizen science can be utilized effectively in research. No matter how one chooses to participate, participation alone brings us closer to our goal.

Wily Weasels: The Math Behind Mustelid Identification

The following piece was written by OAS Communications Coordinator AnnaKathryn Kruger for the Snapshot Wisconsin newsletter. To subscribe to the newsletter, visit this link.

Weasels are the most commonly misidentified animal in the entire Snapshot Wisconsin dataset. What might make them so tricky to identify, and just how do they differ from the other members of their family for whom they are often mistaken, like marten or mink?

As far as their phylogenetic standing, weasels belong to the superfamily Musteloidea. Contained within Musteloidea are the families Mephitidae, which includes skunks; Mustelidae, including weasels, otters, ferrets and kin; and Procyonidae, with raccoons and their neotropical brethren. In examining the dataset of Snapshot Wisconsin photos that have received an expert classification, researchers have determined that weasels and mink are the two most difficult species for volunteers to classify.

Phylogenetic_Tree

The two avenues for classification available to volunteers through Snapshot Wisconsin are MySnapshot and Zooniverse. MySnapshot is the outlet available to those who monitor trail cameras, where they can classify the animals in the photos captured on their own camera. Zooniverse is a public forum where volunteers classify photos that are served up at random from Snapshot Wisconsin cameras across the state. Photos are captured in sets of three, called “triggers”, and volunteers classify the entire set at once.

When evaluating accuracy in classification, researchers focus on two variables: recall and precision. Both variables provide measures of accuracy for a group of volunteer classifiers, either from MySnapshot or Zooniverse, compared to expert classification. Recall addresses the question: out of all the weasel triggers in our dataset, how many did volunteers classify as weasels? Precision addresses the question: how many triggers classified as weasels by volunteers were actually weasels?
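In code, both measures reduce to simple ratios over three counts. A minimal sketch, plugged with the weasel counts reported later in this piece (15 expert-confirmed weasel triggers, 10 classified correctly, and 4 non-weasel triggers labeled as weasels):

```python
def recall_precision(true_positives, actual_total, predicted_total):
    # recall: of all expert-confirmed triggers of a species, the fraction
    #         volunteers labeled correctly
    # precision: of all triggers volunteers labeled as that species, the
    #            fraction that really were that species
    return true_positives / actual_total, true_positives / predicted_total

# 10 correct out of 15 true weasel triggers; 10 + 4 triggers labeled "weasel".
recall, precision = recall_precision(10, 15, 10 + 4)
print(f"recall {recall:.0%}, precision {precision:.0%}")
```

Run with those counts, the function returns roughly 67% recall and 71% precision, matching the figures in the article.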

Between both MySnapshot and Zooniverse, volunteers generally demonstrate high recall and precision when classifying animals that belong to the whole superfamily Musteloidea. When it comes to classifying individual species, we can see that animals like skunks, otters and raccoons are easier to classify correctly on account of their distinctive traits, but weasels are quite similar in physical appearance to the species with which they share a family, namely mink. This makes weasels particularly easy to misidentify.

Triggers containing weasels and mink are most often missed completely on Zooniverse, with a recall value for these species of 41%. Out of an expertly classified sample size of 15 weasels, only 10 of the 15 were identified correctly by volunteers and 4 additional triggers classified as weasels on Zooniverse were not weasels. This puts recall and precision for weasel classification at a measly 67% and 71%, respectively.

For mink, Zooniverse and MySnapshot share low recall, with approximately 65% of mink photos missed completely on Zooniverse, and 39% missed on MySnapshot. However, the triggers that were classified were largely classified correctly, with perfect precision on Zooniverse and 87% precision on MySnapshot.

So how can we successfully identify a weasel versus some other mustelid, and vice versa? There are three types of weasels in Wisconsin. The long-tailed weasel is the largest of the three. They are typically 13-18 inches in length with a 4-6 inch black-tipped tail. Their coats are brown and their bellies and throats are cream-colored, though they transition completely to white in the winter. The short-tailed weasel is Wisconsin’s most common weasel. Smaller than the long-tailed weasel, the two share their coloring, which makes them more difficult to differentiate. The only discernible difference is the tail length. The third type of weasel is the least weasel, aptly named as it is the smallest of the three at roughly 6 inches. Though this weasel has coloring similar to the others, the least weasel has a short tail without the black tip.

Weasels look rather similar to mink, though mink are dark-colored and larger than weasels with long tails and glossy coats. They weigh between 1.5-2 lbs. Another mustelid closely resembling the weasel is the American pine marten, an endangered furbearer with a penchant for climbing. They have large rounded ears and a bushy tail, and their fur varies in shades of brown from almost yellow to almost black. Snapshot Wisconsin has only one confirmed photo of a marten, as the species is incredibly rare in Wisconsin.

Badgers seem like a no-brainer, with their characteristic striped heads and wide bodies. They are significantly larger than weasels and have long claws well-suited to digging. Despite their distinctive appearance, badgers are subject to misidentification as well. 22% of the triggers classified as badgers on MySnapshot were not badgers. The same goes for the fisher, another sizeable mustelid weighing in at an average of 15 pounds, with dark brown fur and a bushy tail.


Classification is a tricky business, especially when it comes to mustelids. Snapshot Wisconsin relies on thousands of volunteers to classify the nearly 34 million photos in our dataset, which they generally do with tremendous success. Though the weasel is a trickster, its phylogenetic camouflage can be discerned with a trained eye – the same can be said for its Mustelidae cousins. Each accurately classified photo, mustelid or not, brings Snapshot Wisconsin closer to a complete representation of Wisconsin wildlife, and better informs our management of these species.