Yes. Scientists have been running big-data-scale experiments on clusters and supercomputers for decades, using techniques such as OpenMP, MPI, and OpenCL from Fortran, C, C++, and Python. They just never branded themselves as "big data" specialists the way "enterprise" people do. CERN, for example, processes more data with its custom frameworks than, I suspect, 90% of Hadoop users.
Hadoop is good at a specific class of problems, but it is by no means the only way to handle big data, and it is quickly falling out of favor for the problems it is not well suited to. Spark, a cluster computing framework, is rapidly gaining popularity (overtaking Hadoop) for machine learning and other computation-intensive work on distributed data.
There are hundreds of other custom systems that tackle the 3Vs of big data (volume, velocity, variety) in different ways.
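The specific class of problems Hadoop targets is the MapReduce model: map records to key-value pairs, shuffle to group by key, then reduce each group. A minimal single-process sketch of that model in plain Python (the function names and the word-count task are illustrative only; real Hadoop distributes each phase across a cluster and spills to disk):

```python
from collections import defaultdict
from itertools import chain

def map_phase(documents):
    # Map step: emit a (word, 1) pair for every word in every document.
    return chain.from_iterable(
        ((word, 1) for word in doc.split()) for doc in documents
    )

def shuffle_phase(pairs):
    # Shuffle step: group values by key, as Hadoop does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce step: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data is big", "data is everywhere"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}
```

Spark generalizes this same idea: it keeps intermediate results in memory and chains many such transformations, which is why it wins for iterative workloads like machine learning.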
Niyajuddin 03 May in IT Courses/Big Data
What would have been the state of BigData had there been no Hadoop?