Buku "Panduan Definitif untuk Power Query (M) – Jilid 4" melanjutkan eksplorasi mendalam tentang kemampuan bahasa M dalam mengelola dan mengoptimalkan transformasi data. Jilid ini membahas topik-topik lanjutan seperti iterasi dan rekursi, penerapan pola data yang bermasalah, strategi optimasi kinerja termasuk query folding dan firewall, hingga teknik pengembangan ekstensi Power Query untuk membangun konektor khusus.
Disusun secara sistematis dengan contoh kasus, tes formatif, glosarium, dan lampiran, buku ini dirancang untuk membantu pembaca memahami konsep lanjutan sekaligus praktik implementasi di dunia nyata. Kehadiran jilid keempat ini menjadikan seri buku Power Query (M) semakin lengkap sebagai rujukan bagi mahasiswa, dosen, peneliti, dan praktisi data yang ingin menguasai transformasi data secara komprehensif dan profesional.
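To give a flavor of the iteration and recursion topics the volume covers, here is a minimal sketch in M; the factorial example and all names in it are illustrative assumptions, not material taken from the book:

let
    // Recursion: inside a let expression, the @ operator lets a function
    // refer to itself by name.
    Factorial = (n as number) as number =>
        if n <= 1 then 1 else n * @Factorial(n - 1),

    // Iteration without explicit recursion: List.Generate threads a state
    // record through successive steps, here producing the first five factorials.
    Factorials = List.Generate(
        () => [i = 1, f = 1],                     // initial state: 1! = 1
        each [i] <= 5,                            // continue while i <= 5
        each [i = [i] + 1, f = [f] * ([i] + 1)],  // next state: (i+1)! = i! * (i+1)
        each [f]                                  // value emitted per step
    )
in
    Factorials  // yields {1, 2, 6, 24, 120}

List.Generate is often the idiomatic choice over raw recursion in M, since it avoids deep call stacks and keeps each step's state explicit in a record.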