Abstract
Judgment analysis in a crowdsourcing environment refers to collecting opinions on a set of questions from a large and diverse pool of online annotators and deriving a consensus from them. Existing approaches to judgment analysis work only in the static setting, where all opinions are available beforehand. We aim to develop a generalized approach to judgment analysis in a streaming setting, where the questions are known in advance but the annotators' opinions arrive as a stream. This paper presents the first algorithm that performs judgment analysis on crowdsourced opinions received in streams. We demonstrate the performance of the proposed approach on two datasets, achieving accuracy close to that of majority voting while requiring significantly less space. The required space is logarithmic in the number of annotators.
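To give a flavour of how a consensus can be maintained over streaming opinions in sublinear space, the sketch below applies the classical Boyer-Moore majority-vote algorithm per question. This is an illustrative stand-in, not the paper's algorithm, and the stream format (question_id, annotator_id, label) is assumed. Each question keeps only a candidate label and a counter; a counter over n annotators occupies O(log n) bits, in the spirit of the logarithmic space bound claimed in the abstract.

from collections import defaultdict

# A minimal, illustrative sketch (not the paper's algorithm): per-question
# Boyer-Moore majority vote over a stream of crowdsourced opinions.
# State per question is one candidate label plus one counter, so the
# counter needs only O(log n) bits for n annotators.

class StreamingMajority:
    def __init__(self):
        # question_id -> [candidate_label, counter]
        self.state = defaultdict(lambda: [None, 0])

    def update(self, question_id, label):
        # Standard Boyer-Moore update: a matching label reinforces the
        # candidate, a conflicting label cancels one vote for it.
        entry = self.state[question_id]
        if entry[1] == 0:
            entry[0], entry[1] = label, 1
        elif entry[0] == label:
            entry[1] += 1
        else:
            entry[1] -= 1

    def consensus(self, question_id):
        # Exact if a strict majority label exists; otherwise heuristic.
        return self.state[question_id][0]

# Hypothetical stream of (question_id, annotator_id, label) opinions.
stream = [
    ("q1", "a1", "yes"), ("q1", "a2", "no"),
    ("q1", "a3", "yes"), ("q1", "a4", "yes"),
]

sm = StreamingMajority()
for question_id, _annotator, label in stream:
    sm.update(question_id, label)

print(sm.consensus("q1"))  # -> "yes"

The guarantee is one-sided: if a strict majority label exists for a question, Boyer-Moore is certain to report it; without one, the reported label is only a guess. The paper's actual algorithm and the datasets it is evaluated on are available from the repository linked under Data and Code Availability below.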
Ethics declarations
Data and Code Availability
Both datasets analyzed in this paper, namely Fact Evaluation and Sentiment Analysis, along with the source code of the algorithms applied to them, are freely available on GitHub: https://github.com/malaybhattacharyya/Judgment_Streaming_Logarithmic.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Patel, J., Mandal, S., Bhattacharyya, M. (2025). Judgment Analysis in a Streaming Setting Incurring Logarithmic Space of the Annotators. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15324. Springer, Cham. https://doi.org/10.1007/978-3-031-78383-8_1
DOI: https://doi.org/10.1007/978-3-031-78383-8_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-78382-1
Online ISBN: 978-3-031-78383-8
eBook Packages: Computer Science, Computer Science (R0)