This paper argues for the usefulness of an intertextual reading of big datasets, as a way to compensate for the inability of quantitative and automated approaches to fulfill out-of-reach promises, especially regarding the unavoidable issue of subjectivity in the act of confronting oneself with a text. We will show that the business origin of the notion of big data is partly responsible for its definition being more of a marketing nature than a scholarly one, inasmuch as some authors keep looking for a distinctive feature other than its digital medium. We will thus challenge any claim to statistical representativeness or to objectivity in the automated quantitative analysis of big data, claims that rest in part on an unfortunate confusion between the simulation of reality and reality itself. Moreover, we will reject the supposed necessity of importing into the humanities and the social sciences the computing-related definitions of the notions of “data” and “information,” so as better to take into account the incremental and iterative nature of the data analysis process. Last but not least, we will offer some thoughts on technology fetishism in the digital humanities and the hegemonic trend it creates, countering it with our proposal for intertextual reading and thereby fostering more qualitative approaches to digital fieldwork.