Animal vocalizations are produced ubiquitously across taxa and are represented in all habitats. Tracking and quantifying animal vocalizations is a basic necessity in biological disciplines such as nature conservation and biomonitoring. With the advancement of digital recording technology, huge amounts of audio recordings have accumulated. Since manual annotation and analysis of the relevant acoustic features are impractical at this scale, the development of reliable algorithms for automatic birdsong analysis is essential. One of the first challenges in birdsong analysis is segmentation of the acoustic signal, i.e. detection and demarcation of its basic elements, or syllables, prior to further analysis. In this study, we present two simple unsupervised algorithms for automatic birdsong segmentation and parameter estimation. The algorithms are based on a smoothed envelope of the signal's short-time energy and on parameters derived from the fundamental frequency and the short-time Fourier transform (STFT). The methods were evaluated on a small database of trill vocalizations recorded with high background noise. The algorithms' output was compared both to manual segmentation carried out by a human expert and to ground-truth values obtained from synthetic signals, and in both cases the results were highly similar. In summary, the methods are shown to accurately segment birdsong signals despite high background noise levels, and since they are simple to implement, they could be of great benefit to bioacoustics researchers.
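The envelope-based segmentation described above can be sketched as follows. This is a minimal illustration only: the moving-average smoother, the frame sizes, and the fixed threshold expressed as a fraction of the peak energy are assumptions for the sketch, not the paper's actual parameter choices.

```python
import numpy as np

def segment_by_energy(signal, frame_len=256, hop=128,
                      smooth_frames=5, threshold_ratio=0.1):
    """Segment a signal by thresholding a smoothed short-time energy envelope.

    Returns a list of (start_sample, end_sample) tuples, one per detected
    syllable. The smoother (moving average) and the threshold rule (a fixed
    fraction of the envelope peak) are illustrative assumptions.
    """
    # Short-time energy: sum of squared samples in each overlapping frame.
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop)
    energy = np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])
    # Smooth the energy curve with a simple moving average.
    kernel = np.ones(smooth_frames) / smooth_frames
    envelope = np.convolve(energy, kernel, mode="same")
    # Frames whose smoothed energy exceeds a fraction of the peak are "voiced".
    above = envelope > threshold_ratio * envelope.max()
    # Rising/falling edges of the boolean mask give the segment boundaries.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(above)]
    # Convert frame indices back to sample indices.
    return [(s * hop, e * hop + frame_len) for s, e in zip(starts, ends)]
```

For example, applying the function to a synthetic signal consisting of silence surrounding a short 1 kHz tone burst recovers a single segment whose boundaries lie close to the true onset and offset of the burst.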