PL-300: Microsoft Power BI Data Analyst


Question 131

Which tool enables you to identify bottlenecks that exist in code?
Q&A.
Column profiling.
Performance analyzer.




Answer is Performance analyzer.


Question 132

What is cardinality?
Cardinality is the granularity of the data.
Cardinality is how long it takes for the data to load.
Cardinality is a type of visual element.
Cardinality is a term that is used to describe the uniqueness of the values in a column. Relationship cardinality refers to the number of rows from one table that are related to another (one to one, one to many, many to many).




Answer is Cardinality is a term that is used to describe the uniqueness of the values in a column. Relationship cardinality refers to the number of rows from one table that are related to another (one to one, one to many, many to many).
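Column cardinality can be illustrated outside Power BI. The sketch below (plain Python, not DAX or Power Query) counts the distinct values in a column relative to its row count; the column names and data are hypothetical:

```python
# Illustrative sketch: column cardinality is the number of distinct
# values a column holds, relative to its total row count.
def cardinality(values):
    """Return (distinct_count, uniqueness_ratio) for a column of values."""
    distinct = len(set(values))
    return distinct, distinct / len(values)

# A high-cardinality column (every value unique) compresses poorly in
# Power BI's VertiPaq storage engine; a low-cardinality column compresses well.
order_ids = [1001, 1002, 1003, 1004]            # high cardinality
statuses = ["Open", "Open", "Closed", "Open"]   # low cardinality

print(cardinality(order_ids))  # (4, 1.0)
print(cardinality(statuses))   # (2, 0.5)
```

This is why reducing column cardinality (for example, by rounding or splitting columns) is a common model-size optimization.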


Question 133

Which Power BI option lets you send fewer queries and disable certain interactions?
Direct query
Query reduction
Query diagnostics




Answer is Query reduction


Question 134

Other than in Power BI, where else can performance optimization be performed?
At the data source
In the Power BI service
In Microsoft SharePoint




Answer is At the data source


Question 135

Is it possible to create a relationship between two columns if they have different data types?
Yes, if the cardinality of the relationship is set to Many-to-Many.
Yes, this is fully supported in the latest version of Power BI Desktop.
No, both columns in a relationship must share the same data type.




Answer is No, both columns in a relationship must share the same data type.

It is not possible to create a relationship between two columns that have different data types; both columns in a relationship must share the same data type.

Question 136

A critical aspect of data aggregation is that it allows you to focus on what?
The important and most meaningful data
Disabling interactive analysis over big data
Larger cache size and decreased query performance




Answer is The important and most meaningful data


Question 137

Before you start creating aggregations, you should first decide what?
The storage mode of your aggregation
The granularity (level) on which to create them.




Answer is The granularity (level) on which to create them.


Question 138


The Impressions table contains approximately 30 million records per month.
You need to create an ad analytics system to meet the following requirements:
- Present ad impression counts for the day, campaign, and Site_name. The analytics for the last year are required.
- Minimize the data model size.
Which two actions should you perform?
Group the impressions by Ad_id, Site_name, and Impression_date. Aggregate by using the CountRows function.
Create one-to-many relationships between the tables.
Create a calculated measure that aggregates by using the COUNTROWS function.
Create a calculated table that contains Ad_id, Site_name, and Impression_date.




Answers are:
Group the impressions by Ad_id, Site_name, and Impression_date. Aggregate by using the CountRows function.
Create one-to-many relationships between the tables.


Perhaps the most effective technique to reduce a model size is to load pre-summarized data. This technique can be used to raise the grain of fact-type tables. There is a distinct trade-off, however, resulting in loss of detail.

For example, a source sales fact table stores one row per order line. Significant data reduction could be achieved by summarizing all sales metrics, grouping by date, customer, and product. Consider, then, that an even more significant data reduction could be achieved by grouping by date at the month level. It could achieve a possible 99% reduction in model size, but reporting at day level—or individual order level—is no longer possible. Deciding to summarize fact-type data always involves tradeoffs. This tradeoff can be mitigated by a Mixed mode design, as described in the Switch to Mixed mode technique.
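The Group By + CountRows step described above can be sketched outside Power Query. The following plain-Python example (the impression rows are hypothetical) groups raw fact rows by Ad_id, Site_name, and Impression_date and counts the rows in each group, showing how pre-summarization shrinks the table:

```python
# Illustrative sketch (plain Python, not Power Query): pre-summarize a
# fact table by grouping on (Ad_id, Site_name, Impression_date) and
# counting rows per group, mirroring Group By with a CountRows aggregate.
from collections import Counter
from datetime import date

# Hypothetical raw impression rows: (Ad_id, Site_name, Impression_date)
impressions = [
    (1, "contoso.com", date(2024, 5, 1)),
    (1, "contoso.com", date(2024, 5, 1)),
    (2, "adatum.com", date(2024, 5, 1)),
    (1, "contoso.com", date(2024, 5, 2)),
]

# Counter keyed by the grouping columns gives the per-group row count.
summary = Counter(impressions)

for (ad_id, site, day), count in sorted(summary.items(), key=str):
    print(ad_id, site, day, count)
```

Here 4 source rows collapse to 3 summarized rows; detail below the day/ad/site grain is lost, which is the trade-off noted above.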

Reference:
https://docs.microsoft.com/en-us/power-bi/guidance/import-modeling-data-reduction

Question 139

You have a Microsoft Power BI report. The size of the PBIX file is 550 MB. The report is accessed by using an app workspace in the shared capacity of powerbi.com.
The report uses an imported dataset that contains one fact table. The fact table contains 12 million rows. The dataset is scheduled to refresh twice a day at 08:00 and 17:00.
The report is a single page that contains 15 AppSource visuals and 10 default visuals.
Users say that the report is slow to load the visuals when they access and interact with the report.
You need to recommend a solution to improve the performance of the report.

What should you recommend?
Change any DAX measures to use iterator functions.
Replace the default visuals with AppSource visuals.
Change the imported dataset to DirectQuery.
Remove unused columns from tables in the data model.




Answer is Change the imported dataset to DirectQuery.

DirectQuery: No data is imported or copied into Power BI Desktop.
Import: The selected tables and columns are imported into Power BI Desktop. As you create or interact with a visualization, Power BI Desktop uses the imported data.

Benefits of using DirectQuery
There are a few benefits to using DirectQuery:
- DirectQuery lets you build visualizations over very large datasets, where it would otherwise be unfeasible to first import all the data with pre-aggregation.
- Underlying data changes can require a refresh of data. For some reports, the need to display current data can require large data transfers, making reimporting data unfeasible. By contrast, DirectQuery reports always use current data.
- The 1-GB dataset limitation doesn't apply to DirectQuery.
Note:
There are several versions of this question in the exam. The question can have other incorrect answer options, including the following:
- Implement row-level security (RLS)
- Increase the number of times that the dataset is refreshed.

Reference:
https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-use-directquery

Question 140

You have a large dataset that contains more than 1 million rows. The table has a datetime column named Date.
You need to reduce the size of the data model without losing access to any data.

What should you do?
Round the hour of the Date column to startOfHour.
Change the data type of the Date column to Text.
Trim the Date column.
Split the Date column into two columns, one that contains only the time and another that contains only the date.




Answer is Split the Date column into two columns, one that contains only the time and another that contains only the date.

Separate date and time tables are needed, and the time should not be stored in the date table, because the same times repeat every day.
Split the DateTime column into separate date and time columns in the fact table, so that you can join the date to the date table and the time to the time table. The time needs to be rounded to the nearest minute or second so that every time in your data corresponds to a row in your time table.
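The split can be sketched outside Power Query. The example below (plain Python, not Power Query M; the sample value is hypothetical) separates a DateTime into a date part and a time part truncated to the minute, so each part can join to a small date or time table:

```python
# Illustrative sketch: split a DateTime value into a date part and a
# time part truncated to the minute, so each part can join to a compact
# date table or time table.
from datetime import datetime, time

def split_datetime(dt):
    """Return (date, time truncated to the minute) for one DateTime value."""
    d = dt.date()
    t = time(dt.hour, dt.minute)  # drop seconds: truncate to the minute
    return d, t

d, t = split_datetime(datetime(2024, 5, 1, 14, 37, 52))
print(d, t)  # 2024-05-01 14:37:00
```

This reduces model size because raw DateTime values are nearly unique (very high cardinality), while a date column has at most 365 distinct values per year and a minute-grain time column at most 1,440, both of which compress far better.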

Reference:
https://intellipaat.com/community/6461/how-to-include-time-in-date-hierarchy-in-power-bi
https://apexinsights.net/blog/top-5-tips-to-optimise-data-model
