Quantitative Consulting has many interesting research publications. Some of them were completed for company purposes, while others are outcomes of our employees' academic research. Some of the content is accessible only after registering your email address; the rest is downloadable directly from this site.

We hope you will find what you are looking for!

correlation, MCMC, recovery rate, regulatory capital, Holt method, Exponential smoothing, extreme value theory, Investment Strategy, logistic regression, value at risk, Asset price jumps, basis spread

probability measure, weak convergence

Weak convergence in D[0,1]. Presentation for the PhD seminar Stochastic modeling in economics and finance, MFF UK, 2008. The presentation covers weak convergence in the space D[0,1].

10.11.2008

constant maturity swap, convexity adjustment, interest rate derivatives, Libor in arrears, valuation models

Valuation of Convexity Related Derivatives. IES WP 4/2008. We investigate the valuation of derivatives whose payoff is defined as a nonlinear, though close to linear, function of tradable underlying assets. Derivatives involving Libor or swap rates in arrears, i.e. rates paid at the wrong time, are a typical example. It is generally tempting to replace the future unknown interest rates with the forward rates. We show rigorously that this is indeed not possible in the case of Libor or swap rates in arrears. We derive a precise, fully analytical formula applicable to a wide class of convexity related derivatives. We illustrate the techniques and the different results on a case study of a real-life controversial exotic swap.
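
As a minimal numeric illustration of the convexity effect discussed above, the sketch below computes the classical Libor-in-arrears adjustment under a lognormal forward-rate assumption; the parameters are made up and this is not the paper's own formula.

```python
import math

def libor_in_arrears_adjustment(F, sigma, T, delta):
    """Expected Libor fixed at T but paid at T rather than T + delta,
    assuming a lognormal forward rate with volatility sigma (illustrative model)."""
    growth = math.exp(sigma ** 2 * T) - 1.0
    return F * (1.0 + F * delta * growth / (1.0 + F * delta))

F, sigma, T, delta = 0.03, 0.20, 5.0, 0.5   # illustrative inputs, not market data
adjusted = libor_in_arrears_adjustment(F, sigma, T, delta)
# adjusted > F: naively substituting the forward rate underprices the in-arrears payoff.
```

The positive gap between the adjusted rate and the plain forward is exactly the convexity adjustment that the paper derives rigorously for a wider class of payoffs.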

01.03.2008

financial derivatives, Mathematica

Use of Mathematica in the evaluation of financial derivatives. Bachelor thesis, MFF UK. The theoretical part deals with the definition, classification and use of financial derivatives in general terms. For valuation, the Black-Scholes formula is described for European options and the binomial tree model for American options. The practical part implements the valuation models in the Mathematica system; the individual models are compared with each other, including the convergence of each numerical procedure.
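
The two valuation approaches compared in the thesis can be sketched in a few lines; the snippet below (in Python rather than Mathematica, with illustrative parameters) prices the same European call with the Black-Scholes formula and a Cox-Ross-Rubinstein binomial tree, showing the convergence of the lattice to the closed form.

```python
import math

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def binomial_call(S, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein binomial price of the same European call with n steps."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs, then backward induction through the tree
    values = [max(S * u ** j * d ** (n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

bs = black_scholes_call(100, 100, 0.05, 0.2, 1.0)
tree = binomial_call(100, 100, 0.05, 0.2, 1.0, 500)
```

With 500 steps the lattice price agrees with the closed form to within a few cents, which is the kind of convergence behaviour the practical part of the thesis examines.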

25.06.2004

correlation, recovery rate, regulatory capital

Survival Analysis in LGD Modeling. IES Working Paper. The paper proposes an application of the survival time analysis methodology to estimations of the Loss Given Default (LGD) parameter. The main advantage of the survival analysis approach compared to classical regression methods is that it allows exploiting partial recovery data. The model is also modified in order to improve the performance of the appropriate goodness-of-fit measures. The empirical testing shows that the Cox proportional model applied to LGD modeling performs better than the linear and logistic regressions. In addition, a significant improvement is achieved with the modified “pseudo” Cox LGD model.
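
To illustrate the core idea, namely that censored (still-running) workouts carry information that plain regression discards, here is a minimal Kaplan-Meier survival-curve sketch on made-up workout data; it is not the paper's Cox model, only the simplest estimator of the same survival-analysis family.

```python
def kaplan_meier(times, events):
    """times  : months until full recovery (or until observation ends)
    events : 1 if full recovery was observed, 0 if the workout is censored
    Returns [(t, S(t))] survival-curve points at observed recovery times."""
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:
            surv *= 1.0 - d / n_at_risk   # conditional survival at this time
            curve.append((t, surv))
        n_at_risk -= sum(1 for ti in times if ti == t)
    return curve

# Hypothetical workout durations in months; 0 marks censored observations,
# which still contribute to the risk set instead of being dropped.
times  = [3, 5, 5, 8, 12, 12, 15, 20]
events = [1, 1, 0, 1,  1,  0,  1,  0]
curve = kaplan_meier(times, events)
```

The censored accounts shrink the risk set without triggering a drop in the curve, which is precisely the partial-recovery information the paper exploits.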

01.02.2010

credit scoring, logistic regression, peer to peer lending, Support vector machines

Quantitative methods to assess the creditworthiness of loan applicants are vital for the profitability and transparency of the lending business. With the total loan volumes typical for traditional financial institutions, even the slightest improvement in credit scoring models can translate into substantial additional profit. Yet for regulatory reasons, and due to the potential model risk, banks tend to be reluctant to replace logistic regression as the industrial standard with new algorithms. This does not stop researchers from examining such new approaches, though. This thesis discusses the potential of support vector machines to become an alternative to logistic regression in credit scoring. Using a real-life credit data set obtained from the P2P lending platform Bondora, scoring models were built to compare the discrimination power of support vector machines against the traditional approach. The results of the comparison were ambiguous. The linear support vector machines performed worse than logistic regression and their training consumed much more time. On the other hand, support vector machines with a non-linear kernel performed better than logistic regression and the difference was statistically significant at the 95% level. Despite this success, several factors prevent SVM from widespread application in credit scoring, higher training times and lower robustness of the method being two of the major drawbacks. Considering the alternative algorithms which became available in the last 10 years, support vector machines cannot be recommended as a standalone method for credit risk models.
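
The discrimination power compared in the thesis is typically summarized by the AUC and the derived Gini coefficient; the toy sketch below (with made-up scores, not the Bondora data) shows the rank-based computation.

```python
def auc(scores_good, scores_bad):
    """AUC as the probability that a randomly chosen good client
    receives a higher score than a randomly chosen bad client."""
    wins = sum((g > b) + 0.5 * (g == b)
               for g in scores_good for b in scores_bad)
    return wins / (len(scores_good) * len(scores_bad))

# Hypothetical scorecard outputs for repaying (good) and defaulting (bad) clients.
scores_good = [0.9, 0.8, 0.75, 0.6, 0.55]
scores_bad  = [0.7, 0.5, 0.4, 0.3]
model_auc = auc(scores_good, scores_bad)
gini = 2.0 * model_auc - 1.0   # the Gini coefficient usually quoted for scorecards
```

Comparing two models then reduces to comparing these statistics, together with a test of whether the difference is significant, as the thesis does at the 95% level.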

24.08.2014

exponential utility, optimal portfolio, portfolio optimization

Talk at the doctoral seminar Stochastic modelling in economics and finance, MFF UK, 2007. The topic is the search for an investment portfolio maximizing the investor's expected utility, covering exponential utility and relative entropy.
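
For the special case of normally distributed excess returns, the exponential-utility problem has a well-known closed form; the sketch below (with illustrative parameters, not taken from the talk) verifies it by brute force over a grid.

```python
# With U(x) = -exp(-a*x) and a normal excess return, the certainty equivalent
# of investing fraction w is w*mu - 0.5*a*(w*sigma)**2, maximized at w* = mu/(a*sigma**2).
def optimal_exponential_weight(mu, sigma, a):
    """mu, sigma : mean and volatility of the risky excess return
    a          : absolute risk aversion coefficient"""
    return mu / (a * sigma ** 2)

mu, sigma, a = 0.06, 0.20, 3.0     # hypothetical market and investor parameters
w_star = optimal_exponential_weight(mu, sigma, a)

# Brute-force check of the closed form on a coarse grid of weights.
ce = lambda w: w * mu - 0.5 * a * (w * sigma) ** 2
grid = [i / 1000 for i in range(2001)]
w_grid = max(grid, key=ce)
```

The grid search lands on the same weight as the closed form, which is the kind of expected-utility maximization the seminar talk discusses in greater generality.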

05.11.2007

LIBOR market model, short-rate models, stochastic models of interest rates

Stochastic interest rates modeling, Diploma thesis, MFF UK. This thesis studies different stochastic models of interest rates. The theoretical part describes short-rate models, the HJM framework and the LIBOR Market model. It focuses in detail on the widely known short-rate models, i.e. the Vašíček, Hull-White and Ho-Lee models, and on the LIBOR Market model. This part ends with the valuation of interest rate options and model calibration to real data. The analytical part analyses the valuation of a real non-standard interest rate derivative using the different models. Part of this valuation is a comparison among the models, both in terms of general valuation and in terms of capturing the dynamics of interest rates. The aim of the thesis is to describe different stochastic models of interest rates and, mainly, to compare them with each other.
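
As a minimal illustration of the short-rate models discussed, here is an Euler discretization of the Vašíček dynamics dr = κ(θ − r)dt + σ dW; the parameters are made up, not the thesis's calibration.

```python
import random

def simulate_vasicek(r0, kappa, theta, sigma, T, steps, seed=42):
    """Euler scheme for the Vašíček model dr = kappa*(theta - r) dt + sigma dW."""
    random.seed(seed)
    dt = T / steps
    r, path = r0, [r0]
    for _ in range(steps):
        dW = random.gauss(0.0, dt ** 0.5)
        r += kappa * (theta - r) * dt + sigma * dW   # mean reversion + diffusion
        path.append(r)
    return path

# Illustrative parameters: mean reversion pulls the rate from r0 toward theta.
path = simulate_vasicek(r0=0.01, kappa=0.5, theta=0.03, sigma=0.01, T=10.0, steps=1000)
```

Because the drift is linear, the Vašíček model also admits closed-form bond prices, which is what makes it (together with Hull-White and Ho-Lee) convenient for the calibration and option valuation covered in the thesis.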

11.04.2011

credit risk, logistic regression, scoring models

Step by Step Credit Risk Model Construction. Bachelor thesis, MFF UK. The aim of the present work is to outline the principles of scoring model construction. It describes the logistic regression method, the estimation of its parameters and the testing of their significance. On the basis of odds ratios it defines the Independence model as an estimate of the conditional odds of a client's ability to pay. It generalizes this model by adding individual weights to groups and categories of client characteristics, which leads to the WOE model and the Full logistic model. The work also studies how to measure the diversification power of the models by the Lorenz curve and Somers' D statistic as an estimate of the Gini coefficient. The described methods are applied to a practical scoring model construction: on real data, the suitability and diversification power of the introduced models are compared.
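
The Weight of Evidence transformation behind the WOE model can be sketched directly from category counts; the counts below are made up for illustration.

```python
import math

def weight_of_evidence(goods, bads, total_goods, total_bads):
    """WOE of one category: log of its share of goods over its share of bads."""
    return math.log((goods / total_goods) / (bads / total_bads))

# Hypothetical categories of one client characteristic, e.g. an income band,
# as (number of good clients, number of bad clients).
categories = {"low": (100, 40), "medium": (300, 45), "high": (200, 15)}
total_goods = sum(g for g, b in categories.values())
total_bads  = sum(b for g, b in categories.values())
woe = {c: weight_of_evidence(g, b, total_goods, total_bads)
       for c, (g, b) in categories.items()}
# Categories where good clients are over-represented get positive WOE values.
```

In the WOE model these values replace the raw categories as regressors, which is the generalization step from the Independence model described above.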

29.05.2008

statistical error, survey sampling

Summary of statistical errors in survey sampling estimation methods. Published in the yearbook Statistika ČSÚ, 2005. The article studies several procedures appropriate for the estimation of the statistical error of relative frequencies. It focuses on different types of suitable probability distributions (hypergeometric, binomial, Poisson), including possible approximations (normal distribution). It also suggests alternative methods for error estimation (bootstrap and jackknife) and methods for error estimation in the case of cluster sampling. The work also deals with the error estimation of other commonly used characteristics such as averages and standard deviations.
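
One of the suggested alternatives, the bootstrap, is easy to sketch for a relative frequency; the synthetic example below compares the bootstrap standard error with the classical binomial approximation.

```python
import random

def bootstrap_se_of_proportion(sample, n_boot=2000, seed=7):
    """Bootstrap standard error of a relative frequency: resample with
    replacement and take the standard deviation of the resampled proportions."""
    random.seed(seed)
    n = len(sample)
    estimates = []
    for _ in range(n_boot):
        resample = [random.choice(sample) for _ in range(n)]
        estimates.append(sum(resample) / n)
    mean = sum(estimates) / n_boot
    var = sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)
    return var ** 0.5

sample = [1] * 30 + [0] * 70            # synthetic sample, 30 % observed frequency
se_boot = bootstrap_se_of_proportion(sample)
se_binomial = (0.3 * 0.7 / 100) ** 0.5  # classical binomial approximation
```

For a simple random sample the two errors agree closely; the bootstrap becomes valuable in the cluster-sampling and weighted-data settings the article discusses, where no simple closed form applies.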

22.07.2005

data weighing, polls on voting intention, quota sample, sampling surveys, statistical error

Statistical error in representative samples from population, Bachelor thesis, MFF UK. This thesis deals with statistical error estimation in sampling surveys. The aim was to find corrections of statistical error estimates in situations where the data are weighted or where the data originate from quota samples. Theoretical considerations are used to derive a more accurate statistical error estimate in the case of a quota sample with one quota variable. The validity of the adjusted estimate is tested by simulation in the case of weighted data. The application and comparison of three different methods, together with the construction of empirical error estimates, are demonstrated on real poll data.

27.05.2010

economy, inflation tax, money issue, seigniorage

Seigniorage in Continuous Time. Article published in the Politická ekonomie journal, 2004. The government is able to acquire real goods by printing money. Because the government does not create wealth through printing money, this revenue, the seigniorage, comes at the expense of the public, as the purchasing power of monetary units decreases because of the issue of new money. The authors use a model of auctions to which the public comes with their money and the government with the newly issued money. The value of goods acquired by the government in such an auction equals the newly printed money divided by the sum of the newly printed money and the money spent by the public. Upon this auction model, the authors develop a formula for seigniorage in continuous time. The seigniorage calculated in this way is lower than the seigniorage calculated upon the assumption of discrete changes in economic variables.

01.07.2004

Basel II, Basel III, risk management

Presentation from the Breakfast with SAS held on 28 April 2010. After an overview of the key risk management principles and tools, the presentation focuses on challenging tasks related to Basel II implementation, in particular LGD, EAD and correlation estimates. The last part then looks at the latest BCBS proposals leading to the Basel III regulatory package, trying to learn a lesson from the recent financial crisis.

28.04.2010

loss given default, Recovery rates, retail lending, survival analysis

The bank regulation embodied in the Basel II Accord has opened up a new era in estimating recovery rates, or the complementary loss given default, in the retail lending credit evaluation process. In this paper we investigate the properties of survival analysis models applied to recovery rates in order to predict loss given default for retail lending. We compare the results to standard techniques such as linear and logistic regressions and discuss the pros and cons of the respective methods. The study is performed on a real dataset of a major Czech bank.

21.10.2013

Asset price jumps, High-frequency trading, Investment Strategy, L-Estimator, Momentum trading

The profitability of a trading system based on the momentum-like effects of price jumps was tested on the time series of 7 assets (EUR/USD, GBP/USD, USD/CHF and USD/JPY exchange rates and Light Crude Oil, E-Mini S&P 500 and VIX Futures), in each case for 7 different frequencies (ranging from 1-minute to 1-day), over a period of more than 20 years (for all assets except the VIX) ending in the second half of 2015. The proposed trading system entered long and short trades in the direction of price jumps, at the closing price of the period in which the jump occurred. The position was held for a fixed number of periods that was optimized on the in-sample period. Jumps were identified with the non-parametric L-Estimator, whose inputs (the period used for local volatility calculation and the confidence level used for jump detection) were also optimized on the in-sample period. The proposed system achieved promising results for the 4 currency markets, especially at the 15-minute and 30-minute frequencies, at which 3 out of the 4 tested currencies turned profitable (with the highest profits achieved by USD/CHF, followed by EUR/USD and GBP/USD), with the profits totaling up to 30-50% p.a. in the case of a high-leverage scenario, or 15-25% in the case of a low-leverage scenario. Additionally, the 5-minute frequency turned profitable for USD/CHF and the 4-hour frequency for GBP/USD, while the 1-minute frequency was unprofitable in all cases due to the commissions, and the 1-day frequency contained too few jumps to draw any conclusions. As for the futures markets, the system achieved profits only on the Light Crude Oil market, at the 1-hour, 4-hour and 1-day frequencies, with the profits totaling up to 20% p.a. in the case of high leverage or 10% p.a. in the case of low leverage. For USD/JPY, E-Mini S&P 500 Futures and VIX Futures the system mostly achieved a loss. We attribute this (in the latter two cases) to the effect of a rising market risk premium in the case of negative jumps, which goes against the jump-momentum effect used by the system.
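
A heavily simplified detector in the spirit of the system (not the actual L-Estimator, and on synthetic data) can be sketched as follows: a return counts as a jump when it exceeds k times the local volatility of the preceding window.

```python
import random

def detect_jumps(returns, window=20, k=4.0):
    """Flag return i as a jump when |r_i| > k * local volatility,
    with local volatility estimated from the preceding `window` returns."""
    jumps = []
    for i in range(window, len(returns)):
        local = returns[i - window:i]
        mean = sum(local) / window
        vol = (sum((r - mean) ** 2 for r in local) / window) ** 0.5
        if vol > 0.0 and abs(returns[i]) > k * vol:
            jumps.append(i)   # a momentum system would then trade in the jump's direction
    return jumps

# Synthetic return series: small noise with one large positive shock injected.
random.seed(1)
returns = [random.gauss(0.0, 0.001) for _ in range(100)]
returns[50] = 0.02          # the injected jump
jumps = detect_jumps(returns)
```

The actual system additionally optimizes the volatility window and the detection confidence level on the in-sample period, which this sketch hard-codes.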

04.08.2017

annual average concentration, mean estimation

Problem of estimating the annual average radon concentration in a building. Seminar work for the Econometrics course, MFF UK. The aim of this work is to construct an estimate of the annual average radon concentration in a building based on shorter observations. Real data from the State Office for Radiation Protection were available (the radon concentration in the observed building and basic meteorological data).

01.01.2006

Credit risk modeling, default risk, firm-value models, portfolio problems, reduced-form models

The thesis covers a wide range of topics in credit risk modeling, with emphasis on the pricing of claims subject to default risk. Starting from a general contingent claim pricing framework, the key topics are classified into three fundamental parts: firm-value models, reduced-form models and portfolio problems, with a possible finer sub-classification. Every part provides a theoretical discussion, self-developed methodologies and related applications designed to be close to real-world problems.

The text also reveals several new findings from various fields of credit risk modeling. In particular, it is shown (i) that the stock option market is a good source of credit information, (ii) how the reduced-form modeling framework can be extended to capture more complicated problems, (iii) that the double t copula together with a self-developed portfolio modeling framework outperforms the classical Gaussian copula approaches. Many other partial findings are presented in the relevant chapters and some other results are also discussed in the Appendix.
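
The Gaussian copula baseline that the double-t copula is compared against can be sketched as a one-factor default simulation; the parameters below are toy values, not the thesis's calibration.

```python
import math
import random

def _norm_ppf(p):
    """Inverse standard normal CDF via bisection on math.erf."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate_default_counts(n_names=100, pd=0.02, rho=0.3, n_sims=10000, seed=3):
    """One-factor Gaussian copula: the latent variable sqrt(rho)*M + sqrt(1-rho)*Z
    triggers default when it falls below the threshold implied by pd."""
    random.seed(seed)
    threshold = _norm_ppf(pd)
    counts = []
    for _ in range(n_sims):
        m = random.gauss(0.0, 1.0)              # common systematic factor
        defaults = 0
        for _ in range(n_names):
            x = math.sqrt(rho) * m + math.sqrt(1.0 - rho) * random.gauss(0.0, 1.0)
            defaults += x < threshold
        counts.append(defaults)
    return counts

counts = simulate_default_counts()
mean_defaults = sum(counts) / len(counts)   # close to n_names * pd
```

A heavier-tailed factor distribution, such as the double t the thesis favours, fattens the loss tail relative to this Gaussian baseline while leaving the mean default count unchanged.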

03.08.2017

basis spread, basis swap, collateral, Cross-currency swap, discount factor, overnight indexed swap, swap spreads

In this study we analyse the relationship between the classical approach to the valuation of linear interest rate derivatives and the post-crisis approach, in which the valuation better reflects credit and liquidity risk and the economic costs of the transaction on top of the risk-free rate. We discuss the method of collateralization to diminish counterparty credit risk, its impact on derivatives pricing, and how overnight indexed swap (OIS) rates became the market standard for discounting future derivatives' cash flows. We show that using one yield curve both to estimate the forward rates and to discount the expected future cash flows is no longer possible in an arbitrage-free market. We review in detail three fundamental interest rate derivatives (the interest rate swap, the basis swap and the cross-currency swap) and we derive discount factors, used for calculating the present value of expected future cash flows, that are consistent with market quotes. We also investigate the drivers behind basis spreads, in particular credit and liquidity risk and supply and demand forces, and show how they impact the valuation of derivatives. We analyse Czech swap rates and propose an estimation of the CZK OIS curve and approximate discount rates in the case of cross-currency swaps. Finally, we discuss inflation markets and the consistent valuation of inflation swaps.
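
The bootstrapping of discount factors from par OIS quotes mentioned above can be sketched for the simplest case of annual fixed payments; the quotes are illustrative, not market data.

```python
def bootstrap_ois_discounts(par_rates):
    """Bootstrap OIS discount factors from par OIS rates for 1Y, 2Y, ...
    maturities with annual fixed payments. For a par swap of maturity n:
        s_n * (D_1 + ... + D_n) = 1 - D_n,
    which is solved forward for D_n given the already-built annuity."""
    discounts = []
    annuity = 0.0
    for s in par_rates:
        d = (1.0 - s * annuity) / (1.0 + s)
        discounts.append(d)
        annuity += d
    return discounts

par_rates = [0.010, 0.012, 0.015, 0.018]   # hypothetical 1Y-4Y par OIS quotes
D = bootstrap_ois_discounts(par_rates)
# These OIS discount factors would then discount the expected cash flows of
# collateralized swaps, with forward rates taken from separate projection curves.
```

The separation of the discounting curve from the projection curves is exactly the multi-curve setup the study contrasts with the classical single-curve approach.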

02.08.2017

CreditMetrics, CreditRisk+, KMV, Monte Carlo simulation, risk modeling

Portfolio Credit Risk Modeling, Diploma thesis, University of Economics. This thesis focuses on state-of-the-art credit models largely implemented by banks into their risk-assessment and complementary valuation frameworks. The reader is provided with both theoretical and applied (practical) approaches that give a clear notion of how the selected portfolio models perform in a real-world environment. Our study comprises the CreditMetrics, CreditRisk+ and KMV models. In the first part of the thesis, our intention is to clarify the main theoretical features and modeling principles, and we also suggest hypotheses about the strengths and drawbacks of every scrutinized model. Subsequently, in the applied part we test the models in a lab environment but with real-world market data. Noticeable stress is also put on model calibration. This enables us to confirm or reject the assumptions made in the theoretical part.

14.09.2010

nonadditivity tests, nonlinear time series, nonlinearity tests

The thesis concentrates on the property of linearity in time series models, its definitions and the possibilities of testing it. The presented tests focus mainly on the time domain; they are based on various statistical methods such as regression, neural networks and random fields. Their implementation in the R software is described. Advantages and disadvantages of tests that are implemented in more than one package are discussed. The second topic of the thesis is additivity in nonlinear models. The definition is introduced, as well as tests developed for detecting its presence. Several tests (of both linearity and additivity) have been implemented in R for the purposes of simulations. The last chapter deals with the application of the tests to real data.

30.01.2014

implied volatility, model-free volatility, realized volatility, Volatility forecasting

In this paper we test the forecasting ability and the information content of 8 popular volatility forecasting models (EWMA, GARCH, FIGARCH, ARIMA-RV, ARFIMA-RV, HAR-RV, Black-Scholes implied volatility and model-free volatility). The models are applied to 5 years of daily data on the evolution of the EUR/USD exchange rate in order to forecast the realized volatility at the 1-day, 5-day and 20-day horizons. The best forecasting results were achieved by the models based on option prices (Black-Scholes implied volatility and model-free volatility), followed by the realized volatility models incorporating long memory (ARFIMA-RV and HAR-RV). Of these models, ARFIMA-RV dominated the others, especially at the longer horizons, where it surpassed even the option-based models. The tests of the information content showed that the option-based models do not subsume all of the information contained in the econometric models (ARFIMA and HAR). We therefore created several hybrid models (using option as well as time series forecasts) that achieved, on average, better results than any of the basic models on their own.
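
The simplest of the compared models, EWMA, can be sketched in a few lines; the recursion below uses the RiskMetrics-style λ = 0.94 on synthetic returns, not the paper's EUR/USD data.

```python
import random

def ewma_variance(returns, lam=0.94):
    """One-step-ahead EWMA variance forecast via the recursion
    var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}**2."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return var

# Synthetic daily returns with a true volatility of 1 % per day.
random.seed(5)
returns = [random.gauss(0.0, 0.01) for _ in range(500)]
forecast_vol = ewma_variance(returns) ** 0.5
```

The other time series models in the comparison (GARCH, FIGARCH, the RV models) replace this fixed-weight recursion with estimated parameters and, for the long-memory variants, with slowly decaying weights.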

01.12.2012