1. Compare Pentaho and Tableau.
Criteria | Pentaho | Tableau |
Functionality | ETL, OLAP and static reports | Data analytics |
Availability | Open source | Proprietary |
Strengths | Data Integration | Interactive visualizations |
2. Define Pentaho and its usage.
Pentaho is regarded as one of the most efficient and versatile data integration (DI) tools; it supports virtually all available data sources and allows scalable data clustering and data mining. It is a lightweight Business Intelligence suite providing Online Analytical Processing (OLAP) services, ETL functions, report and dashboard creation, and other data-analysis and visualization operations.
3. Explain the important features of Pentaho.
4. Name the major applications comprising the Pentaho BI Project.
5. What is the importance of metadata in Pentaho?
A metadata model in Pentaho maps the physical structure of your database onto a logical business model. These mappings are stored in a central repository and allow developers and administrators to build logical, business-oriented views of the database tables that are cost-effective and optimized. The model further simplifies the work of business users, allowing them to create formatted reports and dashboards while ensuring secure access to the data.
All in all, the metadata model provides an encapsulation layer between the physical definitions of your database and their logical representation, and defines the relationships between them.
6. Define Pentaho Reporting Evaluation.
Pentaho Reporting Evaluation is a particular package of a subset of the Pentaho Reporting capabilities, designed for typical first-phase evaluation activities such as accessing sample data, creating and editing reports, and viewing and interacting with reports.
7. Explain the benefits of Data Integration.
8. What is MDX and its usage?
MDX is an acronym for ‘Multi-Dimensional Expressions’, the standard query language introduced by Microsoft SQL Server OLAP Services. MDX is an integral part of the XML for Analysis (XMLA) API and has a different structure from SQL. A basic MDX query is:
SELECT {[Measures].[Unit Sales], [Measures].[Store Sales]} ON COLUMNS,
{[Product].members} ON ROWS
FROM [Sales]
WHERE [Time].[1999].[Q2]
9. Define the three major types of Data Integration Jobs.
10. Illustrate the difference between transformations and jobs.
Transformations move and transform rows from a source system to a target system, while jobs perform high-level operations such as executing transformations, transferring files via FTP, and sending mail.
Another significant difference is that the steps in a transformation execute in parallel, whereas the entries in a job execute in sequence.
11. How do you perform a database join with PDI (Pentaho Data Integration)?
PDI supports joining two tables from the same database using a ‘Table Input’ step, performing the join in SQL itself.
For joining two tables in different databases, users implement the ‘Database Join’ step. However, in a database join, a query is executed against the second database for each input row coming from the main stream, so performance drops as the number of queries increases.
To avoid this situation, there is yet another option for joining rows from two different ‘Table Input’ steps: the ‘Merge Join’ step, with each input SQL query using an ‘ORDER BY’ clause. Remember, the rows must be perfectly sorted on the join key before a merge join.
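As a minimal sketch (table and column names are hypothetical), the same-database case is plain SQL inside one ‘Table Input’ step, while the ‘Merge Join’ case needs each input sorted on the join key:
-- same database: join done inside a single 'Table Input' step
SELECT c.customer_id, c.name, o.order_date, o.amount
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id;
-- different databases: two 'Table Input' steps feeding a 'Merge Join' step,
-- each query sorted on the join key
SELECT customer_id, name FROM customers ORDER BY customer_id;
SELECT customer_id, order_date, amount FROM orders ORDER BY customer_id;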
12. Explain how to sequentialize transformations.
Since all steps in a PDI transformation run in parallel, it is impossible to sequentialize transformations in Pentaho. Making this possible would require changing the core architecture, and sequential processing would also be very slow.
13. Explain Pentaho Reporting Evaluation.
Pentaho Reporting Evaluation is a package of a subset of Pentaho's reporting capabilities, designed for typical first-phase evaluation activities such as accessing sample data, creating and editing reports, and viewing and interacting with reports. It consists of the Pentaho platform components, the Report Designer, and the ad hoc reporting interface, intended for local installation.
14. Can field names in a row be duplicated in Pentaho?
No. Pentaho does not allow two fields in the same row to have the same name.
15. Does a transformation allow field duplication?
Duplicate values, yes; duplicate names, no. A “Select Values” step can select the original field and also select it again under a new name, so the same data ends up in two differently named fields (see question 62).
16. How do you use database connections from the repository?
You can either create a new transformation/job or close and reopen the ones already loaded in Spoon.
17. Explain in brief the concept of a Pentaho Dashboard.
Dashboards are collections of various information objects on a single page, including diagrams, tables and textual information. The Pentaho AJAX API is used to extract BI information, while the Pentaho Solution Repository contains the content definitions.
18. The steps involved in Dashboard creation include:
Transformation logic can be shared using subtransformations, which provide seamless loading and transformation of variables, enhancing the efficiency and productivity of the system. Subtransformations can be called and reconfigured when required.
19. Explain the use of Pentaho Reporting.
Pentaho Reporting enables businesses to create structured, informative reports and to easily access, format and deliver meaningful information to clients and customers. It also helps business users analyze and track customer behavior over specific periods and functions, guiding them toward the right decisions.
20. What is Pentaho Data Mining?
Pentaho Data Mining refers to the Weka project, which consists of a detailed tool set for machine learning and data mining. Weka is open-source software for extracting large sets of information about users, clients and businesses, and it is written in Java.
21. Are Data Integration and ETL programming the same?
No. Data Integration refers to passing data from one type of system to another within the same application, whereas ETL is used to extract and access data from different sources and transform it into other objects and tables.
22. Explain Hierarchy Flattening.
It is the construction of parent-child relationships in a database. Hierarchy Flattening uses both horizontal and vertical formats, which enables easy and trouble-free identification of sub-elements. It further allows users to understand and read the main BI hierarchy, and it includes a parent column, a child column, parent attributes and child attributes.
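A minimal sketch of the idea in SQL, assuming a hypothetical employees table that stores the hierarchy as a parent_id column: a self-join flattens one level of the recursive parent-child structure into parent/child columns:
-- flatten one level of a parent-child hierarchy (names are illustrative)
SELECT p.id   AS parent_id,
       p.name AS parent_name,
       c.id   AS child_id,
       c.name AS child_name
FROM employees c
LEFT JOIN employees p ON c.parent_id = p.id;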
23. Explain Pentaho Report Designer (PRD).
PRD is a graphical tool for report editing, used to create simple and advanced reports and export them to PDF, Excel, HTML and CSV files. PRD is built on a Java-based report engine offering data integration, portability and scalability, so it can be embedded in Java web applications and in application servers such as the Pentaho BA Server.
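As a minimal sketch of what embedding the engine in Java might look like (the report path and output file are placeholders; this assumes the classic reporting engine API):
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.net.URL;
import org.pentaho.reporting.engine.classic.core.ClassicEngineBoot;
import org.pentaho.reporting.engine.classic.core.MasterReport;
import org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.PdfReportUtil;
import org.pentaho.reporting.libraries.resourceloader.ResourceManager;

public class ReportToPdf {
  public static void main(String[] args) throws Exception {
    // boot the reporting engine once per JVM
    ClassicEngineBoot.getInstance().start();
    // load a .prpt report definition created in PRD (path is a placeholder)
    ResourceManager manager = new ResourceManager();
    manager.registerDefaults();
    URL reportUrl = new URL("file:///path/to/report.prpt");
    MasterReport report = (MasterReport)
        manager.createDirectly(reportUrl, MasterReport.class).getResource();
    // render the report to PDF
    try (OutputStream out = new FileOutputStream("report.pdf")) {
      PdfReportUtil.createPDF(report, out);
    }
  }
}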
24. Define the Pentaho report types.
There are several categories of Pentaho reports:
25. What are variables and arguments in transformations?
The transformation dialog box contains two different tables: one for arguments and one for variables. Arguments are values specified on the command line during batch processing, while PDI variables are objects set in a previous transformation/job or in the operating system.
26. How do you configure JNDI for the Pentaho DI Server?
Pentaho offers JNDI connection configuration for local DI so that you do not have to keep an application server running while developing and testing transformations. Edit the properties in the jdbc.properties file located at …\data-integration-server\pentaho-solutions\system\simple-jndi.
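A minimal sketch of an entry in that file (the data source name, driver, URL and credentials below are illustrative, not defaults):
# hypothetical data source 'SampleData' in simple-jndi/jdbc.properties
SampleData/type=javax.sql.DataSource
SampleData/driver=com.mysql.jdbc.Driver
SampleData/url=jdbc:mysql://localhost:3306/sampledb
SampleData/user=dbuser
SampleData/password=dbpassword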
27. Is Pentaho a trademark?
Yes, Pentaho is a trademark.
28. Explain MDX.
Multidimensional Expressions (MDX) is a query language for OLAP databases, much like SQL is a query language for relational databases. It is also a calculation language, with syntax similar to spreadsheet formulas.
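For example, a calculated member shows the spreadsheet-like side of the language (this sketch assumes a FoodMart-style [Sales] cube with Store Sales and Store Cost measures):
WITH MEMBER [Measures].[Profit] AS
  [Measures].[Store Sales] - [Measures].[Store Cost]
SELECT {[Measures].[Profit]} ON COLUMNS,
       {[Product].Members} ON ROWS
FROM [Sales]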
29. Define a tuple.
A finite ordered list of elements is called a tuple.
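In MDX, for example, the tuple ([Time].[1997].[Q2], [Measures].[Unit Sales]) fixes one member from each of two dimensions and thereby identifies a single cell of a cube (FoodMart-style names assumed).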
30. What kind of data does a cube contain?
A cube contains dimensions (for example, time and location) and measures (figures).
31. Differentiate between transformations and jobs.
Transformations are about moving and transforming rows from source to target.
Jobs are more about high-level flow control.
32. How do you do a database join with PDI?
If we want to join two tables from the same database, we can use a “Table Input” step and do the join in SQL itself.
If we want to join two tables that are not in the same database, we can use the “Database Join” step.
33. How do you sequentialize transformations?
It is not possible: in PDI transformations all of the steps run in parallel, so we can’t sequentialize them.
34. How can we use database connections from the repository?
We can create a new transformation/job, or close and re-open the ones we have loaded in Spoon.
35. How do you insert booleans into a MySQL database? PDI encodes a boolean as ‘Y’ or ‘N’, and this can’t be inserted into a BIT(1) column in MySQL.
BIT is not a standard SQL data type. It’s not even standard on MySQL, as its meaning (core definition) changed from MySQL version 4 to 5.
Also, a BIT uses 2 bytes on MySQL. That’s why PDI makes the safe choice and uses a char(1) to store a boolean. There is a simple workaround available: change the data type to “Integer” in the metadata tab of a “Select Values” step. This converts the boolean to 1 for “true” and 0 for “false”, just like MySQL expects.
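A minimal SQL sketch of the problem and of what the workaround sends to MySQL (table and column names are hypothetical):
-- hypothetical MySQL target table
CREATE TABLE flags (is_active BIT(1));
-- PDI's default 'Y'/'N' encoding is rejected for a BIT(1) column:
INSERT INTO flags VALUES ('Y');  -- fails: not a valid BIT value
-- after switching the field to Integer in a 'Select Values' step,
-- PDI effectively sends 1 for true and 0 for false:
INSERT INTO flags VALUES (1);
INSERT INTO flags VALUES (0);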
36. By default all steps in a transformation run in parallel. How can we make one row get processed completely to the end before the next row is processed?
This is not possible: in PDI transformations all the steps run in parallel, so we can’t sequentialize them. This would require architectural changes to PDI, and sequential processing would also be very slow.
37. Why can’t we duplicate fieldnames in a single row?
We simply can’t have duplicate fieldnames. Before PDI v2.5.0 it was possible to force duplicate fields, but even then only the first value of the duplicate fields could ever be used.
38. What are the benefits of Pentaho?
39. Differentiate between arguments and variables.
Arguments:
Arguments are command line arguments that we would normally specify during batch processing.
Variables:
Variables are environment or PDI variables that we would normally set in a previous transformation in a job.
40. What are the applications of Pentaho?
i) The Pentaho BI Suite
ii) All built on the Java platform
41. Define Pentaho Schema Workbench.
Pentaho Schema Workbench offers a graphical interface for designing OLAP cubes for Pentaho Analysis (Mondrian).
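A minimal sketch of the kind of Mondrian schema XML that Schema Workbench produces (table and column names are FoodMart-style illustrations):
<Schema name="SampleSchema">
  <Cube name="Sales">
    <Table name="sales_fact"/>
    <Dimension name="Gender" foreignKey="customer_id">
      <Hierarchy hasAll="true" primaryKey="customer_id">
        <Table name="customer"/>
        <Level name="Gender" column="gender" uniqueMembers="true"/>
      </Hierarchy>
    </Dimension>
    <Measure name="Unit Sales" column="unit_sales" aggregator="sum"/>
  </Cube>
</Schema>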
42. Give a brief about Pentaho Report Designer.
It is a visual, banded report writer. It has various features like subreports, charts and graphs, etc.
43. What do you understand by the term ETL?
ETL stands for Extract, Transform, Load. It is the entry-level process of data manipulation: extracting data from sources, transforming it, and loading it into a target.
44. Explain the Encrypting File System.
It is a technology that enables files to be transparently encrypted, to protect personal data from attackers who have physical access to the computer.
45. What is the ETL process? Write the steps also.
ETL is the extraction, transformation and loading process. The steps are listed below, followed by a minimal SQL sketch of the mapping:
1 – define the source
2 – define the target
3 – create the mapping
4 – create the session
5 – create the workflow
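As a sketch (staging and warehouse table names are hypothetical), the mapping at the heart of the process resembles a source-to-target SQL statement:
-- extract from the staging (source) table, transform, and load into the target
INSERT INTO dw_customer_dim (customer_id, full_name)
SELECT id,
       CONCAT(first_name, ' ', last_name)  -- a simple transformation
FROM staging_customers;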
46. What is metadata?
Metadata is stored in the repository by associating information with individual objects in the repository.
47. What are snapshots?
Snapshots are read-only copies of a master table located on a remote node which can be periodically refreshed to reflect changes made to the master table.
48. What is data staging?
Data staging is actually a group of procedures used to prepare source system data for loading a data warehouse.
49. Differentiate between Full Load and Incremental Load.
Full Load means completely erasing the contents of one or more tables and reloading them with fresh data.
Incremental Load means applying ongoing changes to one or more tables based on a predefined schedule.
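A minimal SQL sketch of the difference, using a hypothetical sales_fact table and a placeholder last-load timestamp:
-- full load: erase the table, then reload everything
TRUNCATE TABLE sales_fact;
INSERT INTO sales_fact SELECT * FROM staging_sales;
-- incremental load: apply only rows changed since the last run
INSERT INTO sales_fact
SELECT * FROM staging_sales
WHERE updated_at > '2024-01-01 00:00:00';  -- placeholder for the last load time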
50. Define mapping.
The dataflow from source to target is called a mapping.
51. Explain a session.
It is a set of instructions that tells when and how to move data from the respective source to the target.
52. What is a Workflow?
It is a set of instructions that tells the Informatica server how to execute the tasks.
53. Define mapplet.
A mapplet creates and configures a reusable set of transformations.
54. What do you understand by a three-tier data warehouse?
A data warehouse is said to be a three-tier system, where a middle tier provides usable data in a secure way to end users. On either side of this middle tier are the end users and the back-end data stores.
55. What is ODS?
ODS stands for Operational Data Store; it sits between the staging area and the data warehouse.
56. Differentiate between an ETL tool and an OLAP tool.
An ETL tool is used for extracting data from legacy systems and loading it into a specified database, with some processing to cleanse the data.
An OLAP tool is used for the reporting process. Here data is available in a multidimensional model, so we can write simple queries to extract data from the database.
57. What is XML?
XML is the Extensible Markup Language, which defines a set of rules for encoding documents in a format that is both human-readable and machine-readable.
58. What are the different versions of Informatica?
Informatica PowerCenter 4.1, Informatica PowerCenter 5.1, Informatica PowerCenter 6.1.2, Informatica PowerCenter 7.1.2, etc.
59. What are the various tools in ETL?
Ab Initio, DataStage, Informatica, Cognos DecisionStream, etc.
60. Define MDX.
MDX stands for Multi-Dimensional Expressions; it is the main query language implemented by Mondrian.
61. Define a multi-dimensional cube.
It is a cube for viewing data, where we can slice and dice the data. It has dimensions, such as time and location, and measures (figures).
62. How do you duplicate a field in a row in a transformation?
Several solutions exist:
Use a “Select Values” step, renaming a field while also selecting the original one. The result is that the original field is duplicated under other names. For example (hypothetical field names):
Fieldname | Rename to |
fieldA | fieldA |
fieldA | fieldB |
fieldA | fieldC |
This will duplicate fieldA to fieldB and fieldC.
Use a Calculator step with e.g. the NVL(A,B) operation, creating new fields fieldB and fieldC as copies of fieldA.
This will have the same effect as the first solution: 3 fields in the output which are copies of each other: fieldA, fieldB, and fieldC.
Use a JavaScript step to copy the field:
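A minimal sketch of the script inside a “Modified Java Script Value” step (fieldB and fieldC must also be declared as output fields of the step; the field names are hypothetical):
// copy the incoming fieldA into two new output fields
var fieldB = fieldA;
var fieldC = fieldA;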
This will have the same effect as the previous solutions: 3 fields in the output which are copies of each other: fieldA, fieldB, and fieldC.
63. We will be using PDI integrated in a web application deployed on an application server, and we have created a JNDI data source in our application server. Of course, Spoon doesn’t run in the context of the application server, so how can we use the JNDI data source in PDI?
If you look in the PDI main directory you will see a sub-directory “simple-jndi”, which contains a file called “jdbc.properties”. Change this file so that the JNDI information matches the one you use in your application server (an example entry is shown under question 26).
After that, in the connection tab of Spoon, set the “Method of access” to JNDI, the “Connection type” to the type of database you’re using, and the “Connection name” to the name of the JNDI data source (as used in “jdbc.properties”).
64. The Text File Input step has a Compression option that allows you to select Zip or Gzip, but it will only read the first file in a Zip. How can I use Apache VFS support to handle tarballs or multi-file zips?
The catch is to specifically restrict the file list to the files inside the compressed collection. Some examples:
You have a file with the following structure:
access.logs.tar.gz
  access.log.1
  access.log.2
  access.log.3
To read each of these files in a File Input step:
File/Directory | Wildcard |
tar:gz:/path/to/access.logs.tar.gz!/access.logs.tar! | .+ |
Note: If you only want certain files in the tarball, you can use a wildcard like access.log..* instead. .+ is the magic if you don’t want to specify the child filenames; .* will not work because it will also match the folder entry (i.e. tar:gz:/path/to/access.logs.tar.gz!/access.logs.tar!/).
You have a simpler file, fat-access.log.gz. You could use the Compression option of the File Input step to deal with this simple case, but if you wanted to use VFS instead, you would use the following specification:
File/Directory | Wildcard |
gz:file://c:/path/to/fat-access.log.gz! | .+ |
Finally, if you have a zip file with the following structure:
access.logs.zip/
  a-root-access.log
  subdirectory1/
    subdirectory-access.log.1
    subdirectory-access.log.2
  subdirectory2/
    subdirectory-access.log.1
    subdirectory-access.log.2
You might want to access all the files, in which case you’d use:
File/Directory | Wildcard |
zip:file://c:/path/to/access.logs.zip! | a-root-access.log |
zip:file://c:/path/to/access.logs.zip!/subdirectory1 | subdirectory-access.log.* |
zip:file://c:/path/to/access.logs.zip!/subdirectory2 | subdirectory-access.log.* |
Note: For some reason, the .+ wildcard doesn’t work in the subdirectories; they still show the directory entries.