Andreas Gieshoff founded evaluaid in 2019. Prior to his career as an
independent statistics consultant, he worked in an executive
internal consulting role at IQVIA, the leading human data science company.
Our experience comprises not only a broad array of statistical methodologies but also their application in real-world scenarios of national and international reach.
Our consulting approach respects two basic principles:
Quality principle: We evaluate whether the available data was sampled with
minimal systematic bias and was quality controlled with appropriate diligence.
Information-value principle: We determine to what extent the information embedded in the available data is in line with user expectations.
As statistics consultants, we adopt the user's perspective and work according to our core values:
objective - scientific - neutral
Due to our long-term engagement in health care information and human data science, we offer a wide spectrum of competencies.
Efficient and precise analysis of your data assets
Holistic evaluation of your data within their specific market context
Competent partners at each stage of your project
The evaluaid solutions consist of three fields:
Do you have questions concerning certain statistical methods, or do you wish to apply them? Do you want to ask your health care data provider the right questions so as to better assess the value of their data? Benefit from our competencies in a multitude of statistical topics as well as in the field of pharmaceutical market information, including analytical concepts in the e-commerce domain. Together we'll find the solution that fits your needs.
Do you want to refresh your statistics knowledge or build statistical competence as a team? We offer online courses and in-house seminars tailored to your requirements.
You know this situation: your sales representative objects to the performance evaluation and casts doubt on the validity of your market data. More often than not, this is the starting point of a time-consuming discussion within your company involving a multitude of staff, not least workers' councils. We support you in this discussion:
Machine learning is undoubtedly an essential technology for harnessing the ever-growing data pools. This applies to both scientific and commercial applications. In a recent paper[1], Sayash Kapoor and Arvind Narayanan draw attention to a problem called "leakage" in ML-based studies. Leakage, be it data leakage or leakage in features, leads to an overestimation of model performance. In their article, Kapoor and Narayanan provide a list of scientific papers where leakage was detected. A comparable overview is currently not available for commercial ML applications. The paper by Kapoor and Narayanan is a recommended read for all those interested in ML. It helps the addressees of ML evaluations ask the right questions of model builders and data scientists. [1] Kapoor S., Narayanan A.: Leakage and the Reproducibility Crisis in ML-based Science. Draft Paper. Center for Information Technology Policy at Princeton University. 2022. https://reproducible.cs.princeton.edu
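One common form of leakage is performing feature selection on the full data set before cross-validating. The following sketch illustrates the effect on synthetic pure-noise data; the nearest-centroid classifier and all sample sizes are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 200 samples, 1000 features, random binary labels.
# By construction, no feature has any real relationship to the label.
n, p, k = 200, 1000, 10
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, n)

def top_k_features(X, y, k):
    """Indices of the k features most correlated (spuriously) with y."""
    corr = np.abs(np.corrcoef(X.T, y)[-1, :-1])
    return np.argsort(corr)[-k:]

def centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Accuracy of a nearest-class-centroid classifier."""
    c0, c1 = X_tr[y_tr == 0].mean(0), X_tr[y_tr == 1].mean(0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return (pred == y_te).mean()

def cv_accuracy(X, y, k, leak, folds=5):
    idx = np.arange(n)
    accs = []
    if leak:
        # WRONG: feature selection sees the test folds -> leakage.
        sel = top_k_features(X, y, k)
    for f in range(folds):
        te = idx % folds == f
        tr = ~te
        if not leak:
            # RIGHT: selection uses the training fold only.
            sel = top_k_features(X[tr], y[tr], k)
        accs.append(centroid_accuracy(X[tr][:, sel], y[tr],
                                      X[te][:, sel], y[te]))
    return float(np.mean(accs))

leaky = cv_accuracy(X, y, k, leak=True)
clean = cv_accuracy(X, y, k, leak=False)
print(f"leaky CV accuracy: {leaky:.2f}")  # well above chance, despite pure noise
print(f"clean CV accuracy: {clean:.2f}")  # near 0.5, as it should be
```

The leaky pipeline reports above-chance accuracy on data that contains no signal at all; only the correctly nested pipeline exposes the model as worthless.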
Reporters Without Borders (RSF), an award-winning international NGO founded in 1985, is dedicated to defending and promoting press freedom. An important part of the NGO's work is the annual publication of the World Press Freedom Index. This index is based on extensive questionnaire work. Here are some key aspects:
87 qualitative questions arranged in 6 umbrella categories, e.g. transparency, media independence, legislative framework.
An additional quantitative category measures the level of abuses and violence.
The questionnaire is translated into 20 languages, including Chinese and Russian, and completed for 180 countries.
Most questions are presented with 10-point unipolar scales with number labels; the scale endpoints, however, are verbalized. Some questions require yes/no answers, and another group of questions has fully verbalized scales with 4 response options.
According to RSF, the questionnaire is completed by several hundred experts such as journalists, lawyers, scientists and human rights activists. Indeed, a close look at the questionnaire shows that expert knowledge is evidently required to answer. Here is an example[1]: Rightfully, and as a matter of transparency, RSF clearly states that this survey is not representative according to scientific criteria[2]. Consequently, no inferences are drawn from the results, nor are any attempts made to calculate […]
Scientific research reports are usually generously peppered with statistics, most often of an inferential nature. Weissgerber et al., for example, examined all original research articles published in June 2017 in the top 25% of physiology journals. There were 328, of which 85% included either a t-test or an analysis of variance[1]. Inferential statistical methods are (also) used to underpin the objectivity and reliability of studies. This is not a problematic intent at all if the information[2] on such statistical methods is documented and allows their assessment and, if necessary, reproduction. Weissgerber's contribution is a committed plea for transparency in statistical procedural matters and highly recommended reading overall. In many cases the application of inferential procedures requires data from probability samples, where every population element has a known non-zero chance of being included in the sample. However, in certain research settings probability samples are not possible. This is the case, for example, when the target population is not well known, or when study participants are not easily accessible for a survey. In order to gain insights at all in such a setting, it may be necessary to use convenience sampling: respondents are included based on their accessibility and willingness to […]
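The difference between a probability sample and a convenience sample can be made concrete with a toy simulation. Everything here is hypothetical: the skewed "spending" population, the sample sizes, and the assumption that only below-median spenders are "accessible" are illustrative choices, not data from any study:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 skewed "spending" values.
population = [random.lognormvariate(3.0, 0.8) for _ in range(10_000)]
true_mean = statistics.fmean(population)

# Probability sample: simple random sampling without replacement,
# so every element has the same known non-zero inclusion probability n/N.
srs = random.sample(population, 500)
srs_estimate = statistics.fmean(srs)

# Convenience sample: suppose only the most "accessible" respondents
# answer -- here, modeled as those with below-median spending.
accessible = sorted(population)[: len(population) // 2]
convenience = random.sample(accessible, 500)
conv_estimate = statistics.fmean(convenience)

print(f"true mean              : {true_mean:.1f}")
print(f"probability-sample est.: {srs_estimate:.1f}")  # close to true mean
print(f"convenience-sample est.: {conv_estimate:.1f}")  # systematically too low
```

The probability sample lands near the true mean by design, while the convenience sample is biased in a way no amount of extra respondents will cure; inferential machinery applied to it quantifies only sampling noise, not this systematic error.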
We gather a large, if not the major, part of our information from the internet. News portals and social media are only a mouse click away and provide the information we think we need. This is a very positive side of the internet. A downside, however, is that under certain circumstances people may find themselves in echo chambers, where like-minded people group together and listen to information and arguments that are uni-directional and merely replicate and confirm existing knowledge, views, and opinions. At the heart of all big data analytics projects lies the specification of predictive statistical models. A situation often encountered is that a model fits well to the data set used to train it; applied to new data, however, it performs much worse. This phenomenon is called over-fitting. The model can only replicate and confirm data it has already seen and cannot deal with new data. It does not generalize and is useless for predictions. This is like a statistical echo chamber. Of course, statistical science provides techniques to mitigate over-fitting, such as feature removal or cross-validation. Such techniques, however, do not replace the fundamental requirement to use […]
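Over-fitting, and how cross-validation exposes it, can be sketched in a few lines. The data, the polynomial degrees and the fold count below are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples from a simple linear trend: the "true" model is degree 1.
n = 30
x = np.linspace(0, 1, n)
y = 2.0 * x + rng.normal(0, 0.3, n)

def train_mse(degree):
    """Error on the same data the polynomial was fitted to."""
    coefs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coefs, x) - y) ** 2))

def cv_mse(degree, folds=5):
    """Mean squared prediction error from k-fold cross-validation:
    each point is predicted by a model that never saw it."""
    idx = np.arange(n)
    errs = []
    for f in range(folds):
        te = idx % folds == f
        tr = ~te
        coefs = np.polyfit(x[tr], y[tr], degree)
        errs.append(np.mean((np.polyval(coefs, x[te]) - y[te]) ** 2))
    return float(np.mean(errs))

for d in (1, 9):
    print(f"degree {d}: train MSE {train_mse(d):.3f}, CV MSE {cv_mse(d):.3f}")
```

The degree-9 polynomial fits its training data better than the straight line, yet its cross-validated error on held-out points is clearly worse than its training error: it has memorized the noise, the statistical echo chamber described above.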
Get in contact with us.
We're looking forward to your request.
evaluaid
Königsberger Str. 15
65527 Niedernhausen
Germany
You are also welcome to write your request using the form below. We will contact you promptly.