Cited by in F6Publishing
For: Sainburg T, Thielk M, Gentner TQ. Finding, visualizing, and quantifying latent structure across diverse animal vocal repertoires. PLoS Comput Biol 2020;16:e1008228. [PMID: 33057332 DOI: 10.1371/journal.pcbi.1008228] [Cited by in Crossref: 47] [Cited by in F6Publishing: 50] [Article Influence: 15.7] [Reference Citation Analysis]
Number Citing Articles
1 Berthet M, Coye C, Dezecache G, Kuhn J. Animal linguistics: a primer. Biol Rev Camb Philos Soc 2023;98:81-98. [PMID: 36189714 DOI: 10.1111/brv.12897] [Reference Citation Analysis]
2 Lorenz C, Hao X, Tomka T, Rüttimann L, Hahnloser RH. Interactive extraction of diverse vocal units from a planar embedding without the need for prior sound segmentation. Front Bioinform 2023;2. [DOI: 10.3389/fbinf.2022.966066] [Reference Citation Analysis]
3 Mcginn K, Kahl S, Peery MZ, Klinck H, Wood CM. Feature embeddings from the BirdNET algorithm provide insights into avian ecology. Ecological Informatics 2023. [DOI: 10.1016/j.ecoinf.2023.101995] [Reference Citation Analysis]
4 Pranic NM, Kornbrek C, Yang C, Cleland TA, Tschida KA. Rates of ultrasonic vocalizations are more strongly related than acoustic features to non-vocal behaviors in mouse pups. Front Behav Neurosci 2022;16:1015484. [PMID: 36600992 DOI: 10.3389/fnbeh.2022.1015484] [Reference Citation Analysis]
5 Provost KL, Yang J, Carstens BC. The impacts of fine-tuning, phylogenetic distance, and sample size on big-data bioacoustics. PLoS One 2022;17:e0278522. [PMID: 36477744 DOI: 10.1371/journal.pone.0278522] [Reference Citation Analysis]
6 Michaud F, Sueur J, Le Cesne M, Haupert S. Unsupervised classification to improve the quality of a bird song recording dataset. Ecological Informatics 2022. [DOI: 10.1016/j.ecoinf.2022.101952] [Reference Citation Analysis]
7 Morales G, Vargas V, Espejo D, Poblete V, Tomasevic JA, Otondo F, Navedo JG. Method for passive acoustic monitoring of bird communities using UMAP and a deep neural network. Ecological Informatics 2022. [DOI: 10.1016/j.ecoinf.2022.101909] [Reference Citation Analysis]
8 Lorenz C, Hao X, Tomka T, Rüttimann L, Hahnloser RH. Extracting extended vocal units from two neighborhoods in the embedding plane. [DOI: 10.1101/2022.09.26.509501] [Reference Citation Analysis]
9 Wang B, Torok Z, Duffy A, Bell D, Wongso S, Velho T, Fairhall A, Lois C. Unsupervised Restoration of a Complex Learned Behavior After Large-Scale Neuronal Perturbation. [DOI: 10.1101/2022.09.09.507372] [Reference Citation Analysis]
10 Xing J, Sainburg T, Taylor H, Gentner TQ. Syntactic modulation of rhythm in Australian pied butcherbird song. R Soc Open Sci 2022;9:220704. [PMID: 36177196 DOI: 10.1098/rsos.220704] [Cited by in Crossref: 1] [Article Influence: 1.0] [Reference Citation Analysis]
11 Sun Y, Yen S, Lin T. soundscape_IR: A source separation toolbox for exploring acoustic diversity in soundscapes. Methods Ecol Evol. [DOI: 10.1111/2041-210x.13960] [Reference Citation Analysis]
12 Schneider S, Hammerschmidt K, Dierkes PW. Introducing the Software CASE (Cluster and Analyze Sound Events) by Comparing Different Clustering Methods and Audio Transformation Techniques Using Animal Vocalizations. Animals 2022;12:2020. [DOI: 10.3390/ani12162020] [Reference Citation Analysis]
13 Pranic NM, Kornbrek C, Yang C, Cleland TA, Tschida KA. Rates but not acoustic features of ultrasonic vocalizations are related to non-vocal behaviors in mouse pups. [DOI: 10.1101/2022.08.05.503007] [Reference Citation Analysis]
14 Comella I, Tasirin JS, Klinck H, Johnson LM, Clink DJ. Investigating note repertoires and acoustic tradeoffs in the duet contributions of a basal haplorrhine primate. Front Ecol Evol 2022;10:910121. [DOI: 10.3389/fevo.2022.910121] [Reference Citation Analysis]
15 Karigo T. Gaining insights into the internal states of the rodent brain through vocal communications. Neurosci Res 2022:S0168-0102(22)00211-5. [PMID: 35908736 DOI: 10.1016/j.neures.2022.07.008] [Reference Citation Analysis]
16 Harvill J, Wani Y, Alam M, Ahuja N, Hasegawa-Johnson M, Chestek D, Beiser DG. Estimation of Respiratory Rate from Breathing Audio. 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) 2022. [DOI: 10.1109/embc48229.2022.9871897] [Reference Citation Analysis]
17 Arnaud V, Pellegrino F, Keenan S, St-gelais X, Mathevon N, Levréro F, Coupé C. Improving the workflow to crack Small, Unbalanced, Noisy, but Genuine (SUNG) datasets in bioacoustics: the case of bonobo calls. [DOI: 10.1101/2022.06.26.497684] [Reference Citation Analysis]
18 Lowe MX, Mohsenzadeh Y, Lahner B, Charest I, Oliva A, Teng S. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations. Cogn Neuropsychol 2022;:1-22. [PMID: 35729704 DOI: 10.1080/02643294.2022.2085085] [Reference Citation Analysis]
19 Bhuyan AK, Dutta H, Biswas S. Unsupervised Quasi-Silence based Speech Segmentation for Speaker Diarization. 2022 IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT) 2022. [DOI: 10.1109/setit54465.2022.9875932] [Reference Citation Analysis]
20 Miller CT, Gire D, Hoke K, Huk AC, Kelley D, Leopold DA, Smear MC, Theunissen F, Yartsev M, Niell CM. Natural behavior is the language of the brain. Curr Biol 2022;32:R482-93. [PMID: 35609550 DOI: 10.1016/j.cub.2022.03.031] [Cited by in Crossref: 11] [Cited by in F6Publishing: 12] [Article Influence: 11.0] [Reference Citation Analysis]
21 Grazioli J, Ghiggi G, Billault-Roux AC, Berne A. MASCDB, a database of images, descriptors and microphysical properties of individual snowflakes in free fall. Sci Data 2022;9:186. [PMID: 35504919 DOI: 10.1038/s41597-022-01269-7] [Reference Citation Analysis]
22 Sainburg T, Mcpherson TS, Arneodo EM, Rudraraju S, Turvey M, Thielman B, Marcos PT, Thielk M, Gentner TQ. Context-dependent sensory modulation underlies Bayesian vocal sequence perception. [DOI: 10.1101/2022.04.14.488412] [Reference Citation Analysis]
23 Valente D, Miaretsoa L, Anania A, Costa F, Mascaro A, Raimondi T, De Gregorio C, Torti V, Friard O, Ratsimbazafy J, Giacoma C, Gamba M. Comparative Analysis of the Vocal Repertoires of the Indri (Indri indri) and the Diademed Sifaka (Propithecus diadema). Int J Primatol. [DOI: 10.1007/s10764-022-00287-x] [Cited by in Crossref: 3] [Cited by in F6Publishing: 2] [Article Influence: 3.0] [Reference Citation Analysis]
24 Zhu Y, Smith A, Hauser K. Automated Heart and Lung Auscultation in Robotic Physical Examinations. IEEE Robot Autom Lett 2022;7:4204-11. [DOI: 10.1109/lra.2022.3149576] [Reference Citation Analysis]
25 Linhart P, Mahamoud-issa M, Stowell D, Blumstein DT. The potential for acoustic individual identification in mammals. Mamm Biol. [DOI: 10.1007/s42991-021-00222-2] [Cited by in Crossref: 2] [Cited by in F6Publishing: 1] [Article Influence: 2.0] [Reference Citation Analysis]
26 Adret P. Developmental Plasticity in Primate Coordinated Song: Parallels and Divergences With Duetting Songbirds. Front Ecol Evol 2022;10:862196. [DOI: 10.3389/fevo.2022.862196] [Reference Citation Analysis]
27 Parsons MJG, Lin T, Mooney TA, Erbe C, Juanes F, Lammers M, Li S, Linke S, Looby A, Nedelec SL, Van Opzeeland I, Radford C, Rice AN, Sayigh L, Stanley J, Urban E, Di Iorio L. Sounding the Call for a Global Library of Underwater Biological Sounds. Front Ecol Evol 2022;10:810156. [DOI: 10.3389/fevo.2022.810156] [Cited by in Crossref: 6] [Cited by in F6Publishing: 7] [Article Influence: 6.0] [Reference Citation Analysis]
28 Sainburg T, Gentner TQ. Toward a Computational Neuroethology of Vocal Communication: From Bioacoustics to Neurophysiology, Emerging Tools and Future Directions. Front Behav Neurosci 2021;15:811737. [PMID: 34987365 DOI: 10.3389/fnbeh.2021.811737] [Cited by in Crossref: 2] [Cited by in F6Publishing: 2] [Article Influence: 2.0] [Reference Citation Analysis]
29 Nguyen HT, Thanh TNL, Ngoc TL, Le AD, Tran DT. Evaluation on Noise Reduction in Subtitle Generator for Videos. Innovative Mobile and Internet Services in Ubiquitous Computing 2022. [DOI: 10.1007/978-3-031-08819-3_14] [Reference Citation Analysis]
30 Mathew J, Sahu P, Singhal B, Joshi A, Medikonda KR, Sathyanarayana J. A Multi-modal Approach to Mining Intent from Code-Mixed Hindi-English Calls in the Hyperlocal-Delivery Domain. Speech and Computer 2022. [DOI: 10.1007/978-3-031-20980-2_41] [Reference Citation Analysis]
31 Ortega SVM, Sarria-paja M. Bird Identification from the Thamnophilidae Family at the Andean Region of Colombia. Computer Information Systems and Industrial Management 2022. [DOI: 10.1007/978-3-031-10539-5_18] [Reference Citation Analysis]
32 Sakata JT, Birdsong D. Vocal Learning and Behaviors in Birds and Human Bilinguals: Parallels, Divergences and Directions for Research. Languages 2022;7:5. [DOI: 10.3390/languages7010005] [Reference Citation Analysis]
33 Morita T, Koda H, Okanoya K, Tachibana RO. Measuring context dependency in birdsong using artificial neural networks. PLoS Comput Biol 2021;17:e1009707. [PMID: 34962915 DOI: 10.1371/journal.pcbi.1009707] [Cited by in Crossref: 1] [Cited by in F6Publishing: 1] [Article Influence: 0.5] [Reference Citation Analysis]
34 Ortega MR, Spisak N, Mora T, Walczak AM. Modeling and predicting the overlap of B- and T-cell receptor repertoires in healthy and SARS-CoV-2 infected individuals. [DOI: 10.1101/2021.12.17.473105] [Cited by in Crossref: 1] [Cited by in F6Publishing: 2] [Article Influence: 0.5] [Reference Citation Analysis]
35 Thomas M, Jensen FH, Averly B, Demartsev V, Manser MB, Sainburg T, Roch MA, Strandburg-peshkin A. A practical guide for generating unsupervised, spectrogram-based latent space representations of animal vocalizations. [DOI: 10.1101/2021.12.16.472881] [Reference Citation Analysis]
36 Neethirajan S. Is Seeing Still Believing? Leveraging Deepfake Technology for Livestock Farming. Front Vet Sci 2021;8:740253. [PMID: 34888374 DOI: 10.3389/fvets.2021.740253] [Cited by in Crossref: 1] [Cited by in F6Publishing: 2] [Article Influence: 0.5] [Reference Citation Analysis]
37 Bazin A, Medigue C, Vallenet D, Calteau A. panModule: detecting conserved modules in the variable regions of a pangenome graph. [DOI: 10.1101/2021.12.06.471380] [Reference Citation Analysis]
38 Česonis J, Franklin DW. Task-dependent switching of feedback controllers. [DOI: 10.1101/2021.12.06.471371] [Reference Citation Analysis]
39 Gressani O, Wallinga J, Althaus C, Hens N, Faes C. EpiLPS: a fast and flexible Bayesian tool for near real-time estimation of the time-varying reproduction number. [DOI: 10.1101/2021.12.02.21267189] [Cited by in Crossref: 1] [Cited by in F6Publishing: 1] [Article Influence: 0.5] [Reference Citation Analysis]
40 Zhong W, Min X, Zheng Z, Sheng Z, Zhou W. Noise Analysis Based on Frequency Domain for Vehicle Voice Denoising. 2021 IEEE International Conference on Emergency Science and Information Technology (ICESIT) 2021. [DOI: 10.1109/icesit53460.2021.9696798] [Reference Citation Analysis]
41 Steinfath E, Palacios-Muñoz A, Rottschäfer JR, Yuezak D, Clemens J. Fast and accurate annotation of acoustic signals with deep neural networks. Elife 2021;10:e68837. [PMID: 34723794 DOI: 10.7554/eLife.68837] [Cited by in Crossref: 7] [Cited by in F6Publishing: 7] [Article Influence: 3.5] [Reference Citation Analysis]
42 Singh Alvarado J, Goffinet J, Michael V, Liberti W 3rd, Hatfield J, Gardner T, Pearson J, Mooney R. Neural dynamics underlying birdsong practice and performance. Nature 2021;599:635-9. [PMID: 34671166 DOI: 10.1038/s41586-021-04004-1] [Cited by in Crossref: 4] [Cited by in F6Publishing: 5] [Article Influence: 2.0] [Reference Citation Analysis]
43 Wick RR, Holt KE. Polypolish: short-read polishing of long-read bacterial genome assemblies. [DOI: 10.1101/2021.10.14.464465] [Cited by in Crossref: 2] [Cited by in F6Publishing: 3] [Article Influence: 1.0] [Reference Citation Analysis]
44 Lu S, Steadman M, Ang GWY, Kozlov AS. Composite receptive fields in the mouse auditory cortex. [DOI: 10.1101/2021.10.13.464267] [Reference Citation Analysis]
45 Chen C, Lin T, Watanabe HK, Akamatsu T, Kawagucci S. Baseline soundscapes of deep‐sea habitats reveal heterogeneity among ecosystems and sensitivity to anthropogenic impacts. Limnology & Oceanography 2021;66:3714-27. [DOI: 10.1002/lno.11911] [Cited by in Crossref: 1] [Cited by in F6Publishing: 1] [Article Influence: 0.5] [Reference Citation Analysis]
46 Liu Z, Proctor L, Collier PN, Zhao X. Automatic Diagnosis and Prediction of Cognitive Decline Associated with Alzheimer’s Dementia through Spontaneous Speech. 2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA) 2021. [DOI: 10.1109/icsipa52582.2021.9576784] [Cited by in Crossref: 1] [Cited by in F6Publishing: 1] [Article Influence: 0.5] [Reference Citation Analysis]
47 Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. Elife 2021;10:e67855. [PMID: 33988503 DOI: 10.7554/eLife.67855] [Cited by in Crossref: 18] [Cited by in F6Publishing: 21] [Article Influence: 9.0] [Reference Citation Analysis]
48 Steinfath E, Palacios A, Rottschäfer JR, Yuezak D, Clemens J. Fast and accurate annotation of acoustic signals with deep neural networks. [DOI: 10.1101/2021.03.26.436927] [Cited by in Crossref: 1] [Cited by in F6Publishing: 1] [Article Influence: 0.5] [Reference Citation Analysis]
49 Jung DH, Kim NY, Moon SH, Jhin C, Kim HJ, Yang JS, Kim HS, Lee TS, Lee JY, Park SH. Deep Learning-Based Cattle Vocal Classification Model and Real-Time Livestock Monitoring System with Noise Filtering. Animals (Basel) 2021;11:357. [PMID: 33535390 DOI: 10.3390/ani11020357] [Cited by in Crossref: 20] [Cited by in F6Publishing: 20] [Article Influence: 10.0] [Reference Citation Analysis]
50 Sainburg T, Thielk M, Gentner TQ. Finding, visualizing, and quantifying latent structure across diverse animal vocal repertoires. PLoS Comput Biol 2020;16:e1008228. [PMID: 33057332 DOI: 10.1371/journal.pcbi.1008228] [Cited by in Crossref: 47] [Cited by in F6Publishing: 50] [Article Influence: 15.7] [Reference Citation Analysis]
51 Morita T, Koda H, Okanoya K, Tachibana RO. Measuring context dependency in birdsong using artificial neural networks. [DOI: 10.1101/2020.05.09.083907] [Cited by in Crossref: 3] [Cited by in F6Publishing: 3] [Article Influence: 1.0] [Reference Citation Analysis]
52 Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. [DOI: 10.1101/811661] [Cited by in Crossref: 9] [Cited by in F6Publishing: 9] [Article Influence: 2.3] [Reference Citation Analysis]